| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-02 06:30:45 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-02 06:30:39 |
| card | string | length 11 to 1.01M |

Each row below is rendered as: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | followed by the full `card` text and a closing `|`.
pakcricketinfo-sapna-shah-video/Pakcricketinfo.Sapna.Shah.Treanding.Video | pakcricketinfo-sapna-shah-video | 2025-06-25T05:59:40Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T05:59:21Z |
[❤ ❤ ❤ Click Here To Link (Watch Full Video)](https://t.co/cJFoFjf13y)
[❤► DOWNLOAD (Full Video Link)](https://t.co/cJFoFjf13y)
[](https://t.co/cJFoFjf13y)
|
bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF | bartowski | 2025-06-25T05:59:29Z | 0 | 0 | null | ["gguf", "text-generation", "base_model:TheDrummer/Cydonia-24B-v3.1", "base_model:quantized:TheDrummer/Cydonia-24B-v3.1", "endpoints_compatible", "region:us"] | text-generation | 2025-06-25T04:09:49Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: TheDrummer/Cydonia-24B-v3.1
base_model_relation: quantized
---
## Llamacpp imatrix Quantizations of Cydonia-24B-v3.1 by TheDrummer
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5697">b5697</a> for quantization.
Original model: https://huggingface.co/TheDrummer/Cydonia-24B-v3.1
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
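For example, a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings (one such llama.cpp-based project; the file name and settings below are illustrative, not prescriptive):

```python
from llama_cpp import Llama

# Load a downloaded quant; n_ctx and n_gpu_layers are illustrative values.
llm = Llama(
    model_path="TheDrummer_Cydonia-24B-v3.1-Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```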
## Prompt format
No prompt format found; check the original model page.
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Cydonia-24B-v3.1-bf16.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-bf16.gguf) | bf16 | 47.15GB | false | Full BF16 weights. |
| [Cydonia-24B-v3.1-Q8_0.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q8_0.gguf) | Q8_0 | 25.05GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Cydonia-24B-v3.1-Q6_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q6_K_L.gguf) | Q6_K_L | 19.67GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Cydonia-24B-v3.1-Q6_K.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q6_K.gguf) | Q6_K | 19.35GB | false | Very high quality, near perfect, *recommended*. |
| [Cydonia-24B-v3.1-Q5_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q5_K_L.gguf) | Q5_K_L | 17.18GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Cydonia-24B-v3.1-Q5_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q5_K_M.gguf) | Q5_K_M | 16.76GB | false | High quality, *recommended*. |
| [Cydonia-24B-v3.1-Q5_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q5_K_S.gguf) | Q5_K_S | 16.30GB | false | High quality, *recommended*. |
| [Cydonia-24B-v3.1-Q4_1.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q4_1.gguf) | Q4_1 | 14.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [Cydonia-24B-v3.1-Q4_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q4_K_L.gguf) | Q4_K_L | 14.83GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Cydonia-24B-v3.1-Q4_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q4_K_M.gguf) | Q4_K_M | 14.33GB | false | Good quality, default size for most use cases, *recommended*. |
| [Cydonia-24B-v3.1-Q4_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q4_K_S.gguf) | Q4_K_S | 13.55GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Cydonia-24B-v3.1-Q4_0.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q4_0.gguf) | Q4_0 | 13.49GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [Cydonia-24B-v3.1-IQ4_NL.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ4_NL.gguf) | IQ4_NL | 13.47GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [Cydonia-24B-v3.1-Q3_K_XL.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q3_K_XL.gguf) | Q3_K_XL | 12.99GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Cydonia-24B-v3.1-IQ4_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ4_XS.gguf) | IQ4_XS | 12.76GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Cydonia-24B-v3.1-Q3_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q3_K_L.gguf) | Q3_K_L | 12.40GB | false | Lower quality but usable, good for low RAM availability. |
| [Cydonia-24B-v3.1-Q3_K_M.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q3_K_M.gguf) | Q3_K_M | 11.47GB | false | Low quality. |
| [Cydonia-24B-v3.1-IQ3_M.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ3_M.gguf) | IQ3_M | 10.65GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Cydonia-24B-v3.1-Q3_K_S.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q3_K_S.gguf) | Q3_K_S | 10.40GB | false | Low quality, not recommended. |
| [Cydonia-24B-v3.1-IQ3_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ3_XS.gguf) | IQ3_XS | 9.91GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Cydonia-24B-v3.1-Q2_K_L.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q2_K_L.gguf) | Q2_K_L | 9.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Cydonia-24B-v3.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ3_XXS.gguf) | IQ3_XXS | 9.28GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Cydonia-24B-v3.1-Q2_K.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-Q2_K.gguf) | Q2_K | 8.89GB | false | Very low quality but surprisingly usable. |
| [Cydonia-24B-v3.1-IQ2_M.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ2_M.gguf) | IQ2_M | 8.11GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Cydonia-24B-v3.1-IQ2_S.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ2_S.gguf) | IQ2_S | 7.48GB | false | Low quality, uses SOTA techniques to be usable. |
| [Cydonia-24B-v3.1-IQ2_XS.gguf](https://huggingface.co/bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF/blob/main/TheDrummer_Cydonia-24B-v3.1-IQ2_XS.gguf) | IQ2_XS | 7.21GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method, but with the embedding and output weights quantized to Q8_0 instead of their usual default.
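If you want to verify this on a downloaded file, here is a small sketch using the `gguf` Python package (its `GGUFReader` interface and the file name here are my assumptions, not part of this repo):

```python
from gguf import GGUFReader, GGMLQuantizationType

reader = GGUFReader("TheDrummer_Cydonia-24B-v3.1-Q4_K_L.gguf")  # illustrative file name
for t in reader.tensors:
    # token_embd/output are the tensors the "-L" quants keep at Q8_0
    if t.name in ("token_embd.weight", "output.weight"):
        print(t.name, GGMLQuantizationType(t.tensor_type).name)
```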
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have the huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF --include "TheDrummer_Cydonia-24B-v3.1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF --include "TheDrummer_Cydonia-24B-v3.1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (TheDrummer_Cydonia-24B-v3.1-Q8_0) or download them all in place (./)
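The same download can also be scripted; a minimal Python sketch using the `huggingface_hub` library:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory.
hf_hub_download(
    repo_id="bartowski/TheDrummer_Cydonia-24B-v3.1-GGUF",
    filename="TheDrummer_Cydonia-24B-v3.1-Q4_K_M.gguf",
    local_dir="./",
)
```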
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights, detailed in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: | -------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
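As a rough illustration of the sizing rule above (a sketch only; the sizes are taken from the file table in this card):

```python
# Pick the largest quant that leaves ~1-2GB of headroom in VRAM.
quants = {"Q6_K": 19.35, "Q5_K_M": 16.76, "Q4_K_M": 14.33, "IQ4_XS": 12.76, "Q3_K_M": 11.47}  # GB

def best_fit(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in quants.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else "nothing fits; offload to system RAM"

print(best_fit(16.0))  # a 16GB GPU -> 'Q4_K_M'
```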
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
New-pakcricketinfo-sapna-shah-video/pakcricketinfo.sapna.shah.Viral.Video.Tutorial.Official | New-pakcricketinfo-sapna-shah-video | 2025-06-25T05:59:23Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T05:59:03Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
Athad/shapes-generator | Athad | 2025-06-25T05:58:13Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "region:us"] | null | 2025-06-25T05:56:13Z |
---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
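Since the card leaves this section blank, here is a minimal, hypothetical sketch inferred only from the card metadata (base model `stabilityai/stable-diffusion-2-1-base`, adapter repo `Athad/shapes-generator`); whether the adapter is a diffusers-loadable LoRA is an unverified assumption:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in the card metadata.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Assumption: the PEFT adapter is stored as diffusers-loadable LoRA weights.
pipe.load_lora_weights("Athad/shapes-generator")

image = pipe("a simple geometric shape on a plain background").images[0]  # hypothetical prompt
```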
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Hiyuan0105/llama2_uuu_news_qlora | Hiyuan0105 | 2025-06-25T05:58:13Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us"] | null | 2025-06-25T02:59:45Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
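The card leaves this section blank; a minimal sketch based only on the metadata above (a PEFT adapter on `NousResearch/Llama-2-7b-chat-hf`; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-chat-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "Hiyuan0105/llama2_uuu_news_qlora")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

inputs = tokenizer("Summarize today's news:", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```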
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
CHIANG0903/llama2_uuu_news_qlora | CHIANG0903 | 2025-06-25T05:57:14Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us"] | null | 2025-06-25T02:50:09Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
pak-cricket-info-sapna-shah-Viral-videos/LINK.VIDEO.pakcricketinfo.sapna.shah.Viral.Video.Tutorial.Official.Link | pak-cricket-info-sapna-shah-Viral-videos | 2025-06-25T05:55:54Z | 0 | 0 | null | ["region:us"] | null | 2025-06-25T05:55:15Z |
[](https://video-tv-go.blogspot.com/2024/11/new-videos-today.html)
|
ianwangnas/llama2_uuu_news_qlora | ianwangnas | 2025-06-25T05:55:23Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us"] | null | 2025-06-25T02:27:09Z |
---
base_model: NousResearch/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
cucucu666/frown-6.25-male | cucucu666 | 2025-06-25T05:51:52Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Fill-dev", "base_model:adapter:black-forest-labs/FLUX.1-Fill-dev", "license:other", "region:us"] | text-to-image | 2025-06-25T03:17:03Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: labii male face, Crayon Shin-chan style, frown expression, plain
white background
widget:
- text: labii male face, Crayon Shin-chan style, frown expression, plain white background
output:
url: image_0.png
- text: labii male face, Crayon Shin-chan style, frown expression, plain white background
output:
url: image_1.png
- text: labii male face, Crayon Shin-chan style, frown expression, plain white background
output:
url: image_2.png
- text: labii male face, Crayon Shin-chan style, frown expression, plain white background
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - cucucu666/frown-6.25-male
<Gallery />
## Model description
These are cucucu666/frown-6.25-male DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `labii male face, Crayon Shin-chan style, frown expression, plain white background` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/cucucu666/frown-6.25-male/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('cucucu666/frown-6.25-male', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('labii male face, Crayon Shin-chan style, frown expression, plain white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
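For instance, a short sketch (assuming the standard diffusers LoRA-fusing API) that bakes the loaded adapter into the base weights so generation runs without adapter overhead:

```py
# Optional: fuse the loaded LoRA into the base weights (diffusers LoRA API).
pipeline.fuse_lora()
image = pipeline('labii male face, Crayon Shin-chan style, frown expression, plain white background').images[0]
```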
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
phospho-app/yeva11-gr00t-kirby_pick_anywhere_0625-fv7ia | phospho-app | 2025-06-25T05:50:45Z | 0 | 0 | null | ["phosphobot", "gr00t", "region:us"] | null | 2025-06-25T05:49:27Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 717, in get_state_or_action
return self.retrieve_data_and_pad(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 586, in retrieve_data_and_pad
raw_data = array[step_indices[~padding_positions]]
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: index 96 is out of bounds for axis 0 with size 96
0%| | 0/640 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [yeva11/kirby_pick_anywhere_0625](https://huggingface.co/datasets/yeva11/kirby_pick_anywhere_0625)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
yale-nlp/MDCure-Qwen2-7B-Instruct | yale-nlp | 2025-06-25T05:50:18Z | 11 | 1 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "multi-document", "long-context", "Long Context", "conversational", "en", "dataset:yale-nlp/MDCure-72k", "arxiv:2410.23463", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:finetune:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-01T07:12:38Z |
---
base_model:
- Qwen/Qwen2-7B-Instruct
datasets:
- yale-nlp/MDCure-72k
language:
- en
license: apache-2.0
tags:
- multi-document
- long-context
- Long Context
library_name: transformers
pipeline_tag: text-generation
---
# MDCure-Qwen2-7B-Instruct
[Paper](https://arxiv.org/pdf/2410.23463) | [🤗 HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395) | [GitHub Repo](https://github.com/yale-nlp/MDCure)
## Introduction
**MDCure** is an effective and scalable procedure for generating high-quality multi-document (MD) instruction tuning data to improve MD capabilities of LLMs. Using MDCure, we construct a suite of MD instruction datasets complementary to collections such as [FLAN](https://github.com/google-research/FLAN) and fine-tune a variety of already instruction-tuned LLMs from the FlanT5, Qwen2, and LLAMA3.1 model families, up to 70B parameters in size. We additionally introduce **MDCureRM**, an evaluator model specifically designed for the MD setting to filter and select high-quality MD instruction data in a cost-effective, RM-as-a-judge fashion. Extensive evaluations on a wide range of MD and long-context benchmarks spanning various tasks show MDCure consistently improves performance over pre-trained baselines and over corresponding base models by up to 75.5%.
We release MDCure datasets of size 12k, 36k, and 72k. We also release MDCureRM and the best MDCure'd model for each architecture/size combination. To access all our models and datasets, please visit our [HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395). For further details regarding dataset construction, please see our [paper](https://arxiv.org/pdf/2410.23463) and [Github repo](https://github.com/yale-nlp/MDCure). For additional details regarding how to use **yale-nlp/MDCure-Qwen2-7B-Instruct**, please see below.
<p align="center">
<img src="fig1.png" width="90%">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>The MDCure pipeline generates diverse multi-document instructions, filters them via fine-grained scoring by MDCureRM, and tunes a base LLM to enhance its multi-document capabilities.</em>
</p>
## Model Details
**yale-nlp/MDCure-Qwen2-7B-Instruct** is initialized from [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) and fine-tuned on the [MDCure-72k](https://huggingface.co/datasets/yale-nlp/MDCure-72k) dataset.
## Requirements
We recommend using the latest version of HF Transformers, or any `transformers>=4.45.0`, to avoid any potential errors when using this model.
## Quickstart
Below we provide a code snippet demonstrating how to load the tokenizer and model and generate content in response to an input context concerning multiple source documents and a related question or instruction. We strongly recommend separating the texts and/or instruction using `\n` or `<doc-sep>` to maintain consistency with the format of the data used during training.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("yale-nlp/MDCure-Qwen2-7B-Instruct", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("yale-nlp/MDCure-Qwen2-7B-Instruct")

source_text_1 = ...
source_text_2 = ...
source_text_3 = ...
prompt = f"{source_text_1}\n{source_text_2}\n{source_text_3}\nWhat happened in CHAMPAIGN regarding Lovie Smith and the 2019 defense improvements? Respond with 1-2 sentences."
messages = [
{"role": "system", "content": "You are an assistant with strong multi-document processing skills."},
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## All MDCure Models
We open-source our custom multi-document instruction scoring model, MDCureRM, as well as our best MDCure'd models at the following links:
| Model | Huggingface Repo | Description |
|---------------------------|---------------------|------------------------------|
| **MDCureRM** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCureRM) | Multi-objective reward model to score and filter MD instruction data more cheaply and effectively than GPT-3.5-Turbo |
| **MDCure-FlanT5-Base** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Base) | **FlanT5-Base** fine-tuned with MDCure-72k |
| **MDCure-FlanT5-Large** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Large) | **FlanT5-Large** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-1.5B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-1.5B-Instruct) | **Qwen2-1.5B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-7B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-7B-Instruct) | **Qwen2-7B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-8B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-8B-Instruct) | **LLAMA3.1-8B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-70B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-70B-Instruct) | **LLAMA3.1-70B-Instruct** fine-tuned with MDCure-72k |
## Citation
If you find our work useful, please cite our paper as:
```bibtex
@article{liu2024mdcure,
title={MDCure: A Scalable Pipeline for Multi-Document Instruction-Following},
author={Gabrielle Kaili-May Liu and Bowen Shi and Avi Caciularu and Idan Szpektor and Arman Cohan},
journal={arXiv preprint arXiv:2410.23463},
year={2024},
url={https://arxiv.org/abs/2410.23463}
}
```
|
yale-nlp/MDCure-Qwen2-1.5B-Instruct | yale-nlp | 2025-06-25T05:50:09Z | 11 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "multi-document", "long-context", "Long Context", "conversational", "en", "dataset:yale-nlp/MDCure-72k", "arxiv:2410.23463", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-01T07:12:23Z |
---
base_model:
- Qwen/Qwen2-1.5B-Instruct
datasets:
- yale-nlp/MDCure-72k
language:
- en
license: apache-2.0
tags:
- multi-document
- long-context
- Long Context
library_name: transformers
pipeline_tag: text-generation
---
# MDCure-Qwen2-1.5B-Instruct
[Paper](https://arxiv.org/pdf/2410.23463) | [🤗 HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395) | [GitHub Repo](https://github.com/yale-nlp/MDCure)
## Introduction
**MDCure** is an effective and scalable procedure for generating high-quality multi-document (MD) instruction tuning data to improve MD capabilities of LLMs. Using MDCure, we construct a suite of MD instruction datasets complementary to collections such as [FLAN](https://github.com/google-research/FLAN) and fine-tune a variety of already instruction-tuned LLMs from the FlanT5, Qwen2, and LLAMA3.1 model families, up to 70B parameters in size. We additionally introduce **MDCureRM**, an evaluator model specifically designed for the MD setting to filter and select high-quality MD instruction data in a cost-effective, RM-as-a-judge fashion. Extensive evaluations on a wide range of MD and long-context benchmarks spanning various tasks show MDCure consistently improves performance over pre-trained baselines and over corresponding base models by up to 75.5%.
We release MDCure datasets of size 12k, 36k, and 72k. We also release MDCureRM and the best MDCure'd model for each architecture/size combination. To access all our models and datasets, please visit our [HF Collection](https://huggingface.co/collections/yale-nlp/mdcure-6724914875e87f41e5445395). For further details regarding dataset construction, please see our [paper](https://arxiv.org/pdf/2410.23463) and [Github repo](https://github.com/yale-nlp/MDCure). For additional details regarding how to use **yale-nlp/MDCure-Qwen2-1.5B-Instruct**, please see below.
<p align="center">
<img src="fig1.png" width="90%">
</p>
<p align="center" style="margin-top: 0; padding-top: 0;">
<em>The MDCure pipeline generates diverse multi-document instructions, filters them via fine-grained scoring by MDCureRM, and tunes a base LLM to enhance its multi-document capabilities.</em>
</p>
## Model Details
**yale-nlp/MDCure-Qwen2-1.5B-Instruct** is initialized from [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) and fine-tuned on the [MDCure-72k](https://huggingface.co/datasets/yale-nlp/MDCure-72k) dataset.
## Requirements
We recommend using the latest version of HF Transformers, or any `transformers>=4.45.0`, to avoid any potential errors when using this model.
## Quickstart
Below we provide a code snippet demonstrating how to load the tokenizer and model and generate content in response to an input context concerning multiple source documents and a related question or instruction. We strongly recommend separating the texts and/or instruction using `\n` or `<doc-sep>` to maintain consistency with the format of the data used during training.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("yale-nlp/MDCure-Qwen2-1.5B-Instruct", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("yale-nlp/MDCure-Qwen2-1.5B-Instruct")

source_text_1 = ...
source_text_2 = ...
source_text_3 = ...
prompt = f"{source_text_1}\n{source_text_2}\n{source_text_3}\nWhat happened in CHAMPAIGN regarding Lovie Smith and the 2019 defense improvements? Respond with 1-2 sentences."
messages = [
{"role": "system", "content": "You are an assistant with strong multi-document processing skills."},
{"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## All MDCure Models
We open-source our custom multi-document instruction scoring model, MDCureRM, as well as our best MDCure'd models at the following links:
| Model | Huggingface Repo | Description |
|---------------------------|---------------------|------------------------------|
| **MDCureRM** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCureRM) | Multi-objective reward model to score and filter MD instruction data more cheaply and effectively than GPT-3.5-Turbo |
| **MDCure-FlanT5-Base** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Base) | **FlanT5-Base** fine-tuned with MDCure-72k |
| **MDCure-FlanT5-Large** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-FlanT5-Large) | **FlanT5-Large** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-1.5B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-1.5B-Instruct) | **Qwen2-1.5B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-Qwen2-7B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-Qwen2-7B-Instruct) | **Qwen2-7B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-8B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-8B-Instruct) | **LLAMA3.1-8B-Instruct** fine-tuned with MDCure-72k |
| **MDCure-LLAMA3.1-70B-Instruct** | [🤗 HF Repo](https://huggingface.co/yale-nlp/MDCure-LLAMA3.1-70B-Instruct) | **LLAMA3.1-70B-Instruct** fine-tuned with MDCure-72k |
## Citation
If you find our work useful, please cite our paper as:
```bibtex
@article{liu2024mdcure,
title={MDCure: A Scalable Pipeline for Multi-Document Instruction-Following},
author={Gabrielle Kaili-May Liu and Bowen Shi and Avi Caciularu and Idan Szpektor and Arman Cohan},
journal={arXiv preprint arXiv:2410.23463},
year={2024},
url={https://arxiv.org/abs/2410.23463}
}
```
|
Yonghoon99/ppo-Huggy | Yonghoon99 | 2025-06-25T05:47:42Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2025-06-25T05:47:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Yonghoon99/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
phospho-app/yeva11-gr00t-kirby_pick_anywhere_0625-vuhgt | phospho-app | 2025-06-25T05:47:27Z | 0 | 0 | null | ["phosphobot", "gr00t", "region:us"] | null | 2025-06-25T05:45:42Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1146, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 996, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 717, in get_state_or_action
return self.retrieve_data_and_pad(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/gr00t/data/dataset.py", line 586, in retrieve_data_and_pad
raw_data = array[step_indices[~padding_positions]]
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: index 96 is out of bounds for axis 0 with size 96
0%| | 0/640 [00:03<?, ?it/s]
```
## Training parameters:
- **Dataset**: [yeva11/kirby_pick_anywhere_0625](https://huggingface.co/datasets/yeva11/kirby_pick_anywhere_0625)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MaIlz/full_task_sft_mol_editing_moleditrl_dataset
|
MaIlz
| 2025-06-25T05:46:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T05:46:42Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: transformers
model_name: full_task_sft_mol_editing_moleditrl_dataset
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for full_task_sft_mol_editing_moleditrl_dataset
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/full_task_sft_mol_editing_moleditrl_dataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hitty28/branch-switch-v3
|
hitty28
| 2025-06-25T05:46:06Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"text-classification",
"branch-switching",
"intent-classification",
"en",
"dataset:custom",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-25T05:45:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- text-classification
- branch-switching
- intent-classification
datasets:
- custom
language:
- en
pipeline_tag: text-classification
---
# Branch Switch Classifier
This model classifies whether a user statement indicates a desire to switch branches or not.
## Model Details
- Base Model: DistilBERT
- Task: Binary Text Classification
- Labels: True, False
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="hitty28/branch-switch-v3")
result = classifier("I want to switch to Mumbai branch")
print(result)
```
## Training Data
Trained on a custom dataset of statements about branch-switching intentions.
|
anvitamanne/lr-5e5-model
|
anvitamanne
| 2025-06-25T05:43:54Z | 28 | 0 | null |
[
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:anvitamanne/base-model",
"base_model:finetune:anvitamanne/base-model",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T16:21:15Z |
---
license: apache-2.0
base_model: anvitamanne/base-model
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: lr-5e5-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr-5e5-model
This model is a fine-tuned version of [anvitamanne/base-model](https://huggingface.co/anvitamanne/base-model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 540.9777
- Wer: 0.3898
- Cer: 0.1646
## Model description
More information needed
## Intended uses & limitations
More information needed
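Usage is not yet documented; as a minimal, hedged sketch (not an official example), assuming the checkpoint is a standard Wav2Vec2 CTC model as the repo tags suggest:

```python
from transformers import pipeline

# Hedged sketch: load the checkpoint as a standard ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="anvitamanne/lr-5e5-model")

# "sample.wav" is a hypothetical placeholder path to an audio file.
print(asr("sample.wav")["text"])
```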
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 324.3731 | 0.86 | 1000 | 509.3808 | 0.4014 | 0.1657 |
| 323.4149 | 1.72 | 2000 | 495.5074 | 0.4006 | 0.1639 |
| 324.4118 | 2.58 | 3000 | 503.3999 | 0.4025 | 0.1647 |
| 312.5412 | 3.44 | 4000 | 500.1373 | 0.4039 | 0.1656 |
| 298.6976 | 4.3 | 5000 | 501.8691 | 0.3958 | 0.1638 |
| 303.839 | 5.17 | 6000 | 511.4516 | 0.3931 | 0.1640 |
| 301.297 | 6.03 | 7000 | 512.8284 | 0.3999 | 0.1663 |
| 296.7412 | 6.89 | 8000 | 517.9861 | 0.3989 | 0.1668 |
| 310.3565 | 7.75 | 9000 | 519.5070 | 0.3960 | 0.1647 |
| 294.8242 | 8.61 | 10000 | 531.7615 | 0.3987 | 0.1661 |
| 278.929 | 9.47 | 11000 | 534.0803 | 0.3892 | 0.1636 |
| 287.4352 | 10.33 | 12000 | 533.1113 | 0.3911 | 0.1636 |
| 294.2136 | 11.19 | 13000 | 532.6003 | 0.3929 | 0.1647 |
| 289.0024 | 12.05 | 14000 | 537.3076 | 0.3921 | 0.1654 |
| 284.6558 | 12.91 | 15000 | 537.4019 | 0.3909 | 0.1648 |
| 283.6182 | 13.78 | 16000 | 539.5662 | 0.3913 | 0.1649 |
| 280.4244 | 14.64 | 17000 | 540.9777 | 0.3898 | 0.1646 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.15.2
|
anvitamanne/hd-0.3-model
|
anvitamanne
| 2025-06-25T05:43:36Z | 10 | 0 | null |
[
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:anvitamanne/base-model",
"base_model:finetune:anvitamanne/base-model",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T17:37:18Z |
---
license: apache-2.0
base_model: anvitamanne/base-model
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hd-0.3-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hd-0.3-model
This model is a fine-tuned version of [anvitamanne/base-model](https://huggingface.co/anvitamanne/base-model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 560.9241
- Wer: 0.4023
- Cer: 0.1685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 313.894 | 0.86 | 1000 | 508.5718 | 0.4055 | 0.1656 |
| 315.6504 | 1.72 | 2000 | 526.5672 | 0.4005 | 0.1642 |
| 304.3114 | 2.58 | 3000 | 525.9501 | 0.3996 | 0.1648 |
| 296.7249 | 3.44 | 4000 | 497.6855 | 0.3972 | 0.1626 |
| 282.7711 | 4.3 | 5000 | 512.9740 | 0.4060 | 0.1657 |
| 282.1519 | 5.17 | 6000 | 525.6339 | 0.3989 | 0.1654 |
| 275.2861 | 6.03 | 7000 | 555.5438 | 0.4032 | 0.1672 |
| 277.682 | 6.89 | 8000 | 532.3320 | 0.3942 | 0.1642 |
| 279.296 | 7.75 | 9000 | 541.7022 | 0.3982 | 0.1679 |
| 264.0832 | 8.61 | 10000 | 536.3400 | 0.3967 | 0.1665 |
| 261.8448 | 9.47 | 11000 | 553.1898 | 0.4014 | 0.1682 |
| 252.598 | 10.33 | 12000 | 554.9163 | 0.3989 | 0.1675 |
| 274.7766 | 11.19 | 13000 | 574.4638 | 0.4000 | 0.1690 |
| 259.2969 | 12.05 | 14000 | 566.6737 | 0.4019 | 0.1696 |
| 257.0598 | 12.91 | 15000 | 567.9193 | 0.4031 | 0.1693 |
| 263.2721 | 13.78 | 16000 | 563.6974 | 0.4034 | 0.1687 |
| 274.2213 | 14.64 | 17000 | 560.9241 | 0.4023 | 0.1685 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.15.2
|
pakcricketinfo-sapna-shah-Viral-video-MTV/FULL.VIDEO.pakcricketinfo.sapna.shah.Viral.Video.Tutorial.Official
|
pakcricketinfo-sapna-shah-Viral-video-MTV
| 2025-06-25T05:43:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:23:07Z |
<a href="https://t.co/tRvC6b2viz"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
anvitamanne/ep-30-model
|
anvitamanne
| 2025-06-25T05:43:03Z | 0 | 0 | null |
[
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:anvitamanne/base-model",
"base_model:finetune:anvitamanne/base-model",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T14:29:34Z |
---
license: apache-2.0
base_model: anvitamanne/base-model
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ep-30-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ep-30-model
This model is a fine-tuned version of [anvitamanne/base-model](https://huggingface.co/anvitamanne/base-model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 603.7294
- Wer: 0.3891
- Cer: 0.1674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 313.4229 | 0.86 | 1000 | 512.8486 | 0.4031 | 0.1662 |
| 312.2834 | 1.72 | 2000 | 509.6784 | 0.3964 | 0.1643 |
| 303.4682 | 2.58 | 3000 | 521.8994 | 0.3944 | 0.1642 |
| 299.0293 | 3.44 | 4000 | 489.4095 | 0.3982 | 0.1629 |
| 286.2679 | 4.3 | 5000 | 516.8929 | 0.4048 | 0.1660 |
| 285.5344 | 5.17 | 6000 | 550.6877 | 0.4034 | 0.1672 |
| 278.8618 | 6.03 | 7000 | 549.6069 | 0.4035 | 0.1671 |
| 281.2304 | 6.89 | 8000 | 536.3907 | 0.3991 | 0.1653 |
| 281.8211 | 7.75 | 9000 | 569.9989 | 0.4124 | 0.1700 |
| 266.6356 | 8.61 | 10000 | 531.8161 | 0.4015 | 0.1670 |
| 263.5382 | 9.47 | 11000 | 573.9767 | 0.4035 | 0.1683 |
| 253.7602 | 10.33 | 12000 | 566.3726 | 0.4052 | 0.1695 |
| 276.6175 | 11.19 | 13000 | 576.7356 | 0.4027 | 0.1693 |
| 260.0645 | 12.05 | 14000 | 573.5627 | 0.3988 | 0.1665 |
| 257.4325 | 12.91 | 15000 | 569.2803 | 0.4014 | 0.1684 |
| 263.3572 | 13.78 | 16000 | 574.4833 | 0.4014 | 0.1680 |
| 271.3235 | 14.64 | 17000 | 568.9285 | 0.3937 | 0.1645 |
| 271.2437 | 15.5 | 18000 | 560.3303 | 0.3950 | 0.1660 |
| 272.6667 | 16.36 | 19000 | 559.9153 | 0.3968 | 0.1670 |
| 268.6009 | 17.22 | 20000 | 566.6968 | 0.3959 | 0.1666 |
| 274.8418 | 18.08 | 21000 | 578.3120 | 0.3931 | 0.1659 |
| 268.7353 | 18.94 | 22000 | 560.3764 | 0.3973 | 0.1675 |
| 253.8548 | 19.8 | 23000 | 572.3874 | 0.3913 | 0.1654 |
| 263.4848 | 20.66 | 24000 | 584.7192 | 0.3919 | 0.1655 |
| 261.7505 | 21.52 | 25000 | 585.3862 | 0.3948 | 0.1671 |
| 264.9873 | 22.38 | 26000 | 591.6250 | 0.3908 | 0.1660 |
| 261.2484 | 23.25 | 27000 | 586.8426 | 0.3907 | 0.1670 |
| 261.3986 | 24.11 | 28000 | 598.3438 | 0.3882 | 0.1661 |
| 250.799 | 24.97 | 29000 | 593.3273 | 0.3905 | 0.1672 |
| 247.0973 | 25.83 | 30000 | 600.5747 | 0.3880 | 0.1669 |
| 253.7963 | 26.69 | 31000 | 605.4449 | 0.3899 | 0.1673 |
| 254.9214 | 27.55 | 32000 | 604.3179 | 0.3916 | 0.1674 |
| 248.1459 | 28.41 | 33000 | 605.5740 | 0.3914 | 0.1671 |
| 255.9482 | 29.27 | 34000 | 603.7294 | 0.3891 | 0.1674 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 3.6.0
- Tokenizers 0.15.2
|
vemedia/pok
|
vemedia
| 2025-06-25T05:34:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T05:32:23Z |
---
license: apache-2.0
---
|
videohdtv/video-trending-prajaktamali-viral-mms
|
videohdtv
| 2025-06-25T05:34:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:33:47Z |
2 minutes ago - video-trending-prajaktamali-viral-mms
The video-trending-prajaktamali-viral-mms video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
4everStudent/Qwen2-0.5B-GRPO-test-5epochs
|
4everStudent
| 2025-06-25T05:30:52Z | 138 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T16:33:56Z |
---
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test-5epochs
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test-5epochs
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="4everStudent/Qwen2-0.5B-GRPO-test-5epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
videotvfusion/original-prajakta-mali-video-clip
|
videotvfusion
| 2025-06-25T05:28:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:28:20Z |
1 minute ago - wAtch-original-prajakta-mali-video-clip
The original-prajakta-mali-video-clip video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
videotvfusion/wAtch-prajakta-mali-viral-video-official
|
videotvfusion
| 2025-06-25T05:27:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:27:16Z |
1 minute ago - wAtch-prajakta-mali-viral-video-official
The wAtch-prajakta-mali-viral-video-official video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ankraj/mediguide
|
ankraj
| 2025-06-25T05:26:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T04:54:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
electric-otter/cgdtmoe
|
electric-otter
| 2025-06-25T05:23:43Z | 0 | 0 | null |
[
"en",
"base_model:electric-otter/cgdtmoe",
"base_model:finetune:electric-otter/cgdtmoe",
"license:mit",
"region:us"
] | null | 2025-06-23T13:49:51Z |
---
license: mit
language:
- en
base_model:
- electric-otter/cgdtmoe
new_version: electric-otter/cgdtmoe
---
|
videotvfusion/original.Video.juliana.marins.bbc.viral.clip.new
|
videotvfusion
| 2025-06-25T05:23:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:23:21Z |
1 minute ago - original.Video.juliana.marins.bbc.viral.clip.new
The original.Video.juliana.marins.bbc.viral.clip.new video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
VyoJ/SmolVLM-500M-Instruct-be-GGUF
|
VyoJ
| 2025-06-25T05:23:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"image-text-to-text",
"en",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-25T05:13:24Z |
---
license: apache-2.0
language:
- en
base_model:
- ggml-org/SmolVLM-500M-Instruct-GGUF
- HuggingFaceTB/SmolVLM-500M-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---
# Model Information
SmolVLM-500M is a tiny multimodal model by HuggingFace. It was converted to the GGUF format by ggml-org.
I converted it to a big-endian format and uploaded it for use on IBM z/OS machines.
**Model developer**: HuggingFace
**Model Architecture**: Based on Idefics3
**License**: Apache 2.0
For more details on the model, please see HuggingFace's original [model card](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct)
|
mmnga/Llama-3.1-Swallow-8B-Instruct-v0.5-gguf
|
mmnga
| 2025-06-25T05:22:58Z | 0 | 0 | null |
[
"gguf",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"base_model:quantized:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"license:llama3.3",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-25T03:23:30Z |
---
license:
- llama3.3
- gemma
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
base_model:
- tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
---
# Llama-3.1-Swallow-8B-Instruct-v0.5-gguf
This is a gguf-format conversion of [Llama-3.1-Swallow-8B-Instruct-v0.5](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5), published by tokyotech-llm.
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
build/bin/llama-cli -m 'Llama-3.1-Swallow-8B-Instruct-v0.5-gguf' -n 128 -c 128 -p 'You are a professional chef. Tell me a recipe.' -cnv
```
|
videotvfusion/Trends.Video.juliana.marins.bbc.viral.videos.official
|
videotvfusion
| 2025-06-25T05:20:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:20:31Z |
1 minute ago - Trends.Video.juliana.marins.bbc.viral.videos.official
The Trends.Video.juliana.marins.bbc.viral.videos.official video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
russdill/kronk
|
russdill
| 2025-06-25T05:16:51Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-06-24T17:27:05Z |
Piper TTS voice model trained on samples of Kronk from The Emperor's New Groove.
The script and subtitles were used to pull audio samples. Samples with
excessive noise and cross-talk were dropped. The remaining samples were passed
through MVSep DnR v3 to remove background noise and music. A second trimming
pass removed unsatisfactory samples or portions of samples and trimmed
silence. A final step normalized the volume levels of all samples.
TextyMcSpeechy was used to train the model.
|
trongg/2410d46d-9b55-41ca-88b2-5388da286ccb_huhu
|
trongg
| 2025-06-25T05:15:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T05:11:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
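Pending details from the authors, a minimal sketch, assuming the checkpoint is a standard conversational Qwen2 causal LM as the repo tags suggest:

```python
from transformers import pipeline

# Hedged sketch: run the checkpoint as a chat-style text-generation pipeline.
generator = pipeline("text-generation", model="trongg/2410d46d-9b55-41ca-88b2-5388da286ccb_huhu", device_map="auto")
messages = [{"role": "user", "content": "Summarize what you can help with in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```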
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yujiepan/phi-moe-tiny-random
|
yujiepan
| 2025-06-25T05:14:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phimoe",
"text-generation",
"conversational",
"custom_code",
"base_model:microsoft/Phi-tiny-MoE-instruct",
"base_model:finetune:microsoft/Phi-tiny-MoE-instruct",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-25T05:13:46Z |
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- microsoft/Phi-tiny-MoE-instruct
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [microsoft/Phi-tiny-MoE-instruct](https://huggingface.co/microsoft/Phi-tiny-MoE-instruct).
### Example usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "yujiepan/phi-moe-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, trust_remote_code=True)
print(pipe('Write an article about Artificial Intelligence.'))
```
### Codes to create this repo:
```python
import json
from pathlib import Path
import torch
import accelerate
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
set_seed,
)
source_model_id = "microsoft/Phi-tiny-MoE-instruct"
save_folder = "/tmp/yujiepan/phi-moe-tiny-random"
processor = AutoTokenizer.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
for k, v in config_json['auto_map'].items():
config_json['auto_map'][k] = f'{source_model_id}--{v}'
config_json['head_dim'] = 32
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 128
config_json['num_attention_heads'] = 2
config_json['num_experts_per_tok'] = 2
config_json['num_hidden_layers'] = 2
config_json['num_key_value_heads'] = 1
config_json['num_local_experts'] = 8
config_json['tie_word_embeddings'] = True
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
automap = config_json['auto_map']
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.2)
print(name, p.shape)
model.save_pretrained(save_folder)
print(model)
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
config_json = json.load(f)
config_json['auto_map'] = automap
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
for python_file in Path(save_folder).glob('*.py'):
python_file.unlink()
```
### Printing the model:
```text
PhiMoEForCausalLM(
(model): PhiMoEModel(
(embed_tokens): Embedding(32064, 64)
(layers): ModuleList(
(0-1): 2 x PhiMoEDecoderLayer(
(self_attn): PhiMoESdpaAttention(
(q_proj): Linear(in_features=64, out_features=64, bias=True)
(k_proj): Linear(in_features=64, out_features=32, bias=True)
(v_proj): Linear(in_features=64, out_features=32, bias=True)
(o_proj): Linear(in_features=64, out_features=64, bias=True)
(rotary_emb): PhiMoERotaryEmbedding()
)
(block_sparse_moe): PhiMoESparseMoeBlock(
(gate): Linear(in_features=64, out_features=8, bias=False)
(experts): ModuleList(
(0-7): 8 x PhiMoEBlockSparseTop2MLP(
(w1): Linear(in_features=64, out_features=128, bias=False)
(w2): Linear(in_features=128, out_features=64, bias=False)
(w3): Linear(in_features=64, out_features=128, bias=False)
(act_fn): SiLU()
)
)
)
(input_layernorm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
)
)
(norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=64, out_features=32064, bias=True)
)
```
|
rIsHu009/Basic_Model
|
rIsHu009
| 2025-06-25T05:14:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-25T05:14:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
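Pending details from the authors, a minimal sketch, assuming the checkpoint is a standard BERT encoder as the `bert`/`feature-extraction` tags suggest:

```python
from transformers import pipeline

# Hedged sketch: extract token-level embeddings via the feature-extraction pipeline.
extractor = pipeline("feature-extraction", model="rIsHu009/Basic_Model")
features = extractor("Example sentence to embed.")

# Output is nested as [batch][tokens][hidden]; print token count and hidden size.
print(len(features[0]), len(features[0][0]))
```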
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
videotvfusion/original-jaipur-5-star-hotel-video-clip
|
videotvfusion
| 2025-06-25T05:12:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:12:17Z |
1 minute ago - wATCH.original-jaipur-5-star-hotel-video-clip
The jaipur-5-star-hotel-video video has become a trending topic across social media platforms, sparking widespread attention and concern.
[CLICK HERE ==►► WATCH NOW](https://t.co/w4GQblBMlq)
[CLICK HERE ==►► WATCH NOW FREE](https://t.co/w4GQblBMlq)
<a href="https://t.co/w4GQblBMlq" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
|
CohenQu
| 2025-06-25T05:11:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceTB/smoltalk",
"base_model:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06",
"base_model:finetune:CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T04:04:08Z |
---
base_model: CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06
datasets: HuggingFaceTB/smoltalk
library_name: transformers
model_name: sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard
This model is a fine-tuned version of [CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06](https://huggingface.co/CohenQu/llama3_3b-finemath-4plus-flexible-ordering.00.06) on the [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CohenQu/sft_llama3_3b-finemath-4plus-flexible-ordering.00.06-4000_numina-cot-100k_orchard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/flexible-ordering/runs/lnymc5l7)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saquib34/tinyllama-linux-finetune
|
saquib34
| 2025-06-25T05:10:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-06-24T10:59:51Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
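Pending details from the authors, a minimal sketch, assuming the repo holds a standard PEFT (LoRA) adapter for the base model declared in the card metadata; the example prompt is hypothetical, chosen to match the "linux" hint in the repo name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # from the card's base_model field
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "saquib34/tinyllama-linux-finetune")

messages = [{"role": "user", "content": "How do I list hidden files in a directory on Linux?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```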
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
hasdal/c9ba8399-1003-4804-a60f-2f9ae22d455d
|
hasdal
| 2025-06-25T05:09:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:48:22Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
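Pending details from the authors, a minimal sketch, assuming the checkpoint is a conversational Llama-style causal LM whose tokenizer ships a chat template (per the repo tags):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hasdal/c9ba8399-1003-4804-a60f-2f9ae22d455d"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Hedged sketch: format a single-turn chat and generate a short reply.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```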
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Leanna-Perry-Viral-Video/OFFICIAL.Leanna.Perry.viral.Video.X.Trending.Now
|
New-Leanna-Perry-Viral-Video
| 2025-06-25T05:06:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T05:05:56Z |
[โค โค โค ๐ข๐
๐๐ผ๐ ๐ง๐พ๐๐พ ๐ณ๐ ๐
๐๐๐ (๐ถ๐บ๐๐ผ๐ ๐ฅ๐๐
๐
๐ต๐๐ฝ๐พ๐)](https://t.co/cJFoFjf13y)
[ โคโบ๐ฃ๐ฎ๐ถ๐ญ๐ซ๐ฎ๐ ๐ฃ (๐ฅ๐๐
๐
๐ต๐๐ฝ๐พ๐ ๐ซ๐๐๐) ](https://t.co/cJFoFjf13y)
[](https://t.co/cJFoFjf13y)
|
chinmay130000/deberta-v3-base-sst2-qnli
|
chinmay130000
| 2025-06-25T05:02:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-25T04:45:58Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: deberta-v3-base-sst2-qnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-sst2-qnli
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.6125
- Precision: 0.6125
- Recall: 1.0
- F1: 0.7597
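The flat metrics across epochs (recall 1.0 with precision stuck at 0.6125) suggest the classifier learned to predict the positive label for every input, so validate on your own data before relying on it. A minimal loading sketch, assuming the standard `transformers` pipeline API (`sentencepiece` must be installed for DeBERTa-v3 tokenization):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
clf = pipeline("text-classification", model="chinmay130000/deberta-v3-base-sst2-qnli")

# Label names (LABEL_0 / LABEL_1) depend on the saved config.
print(clf("The movie was surprisingly good."))
```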
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.7054 | 1.0 | 63 | 0.6689 | 0.6125 | 0.6125 | 1.0 | 0.7597 |
| 0.6939 | 2.0 | 126 | 0.6728 | 0.6125 | 0.6125 | 1.0 | 0.7597 |
| 0.6864 | 3.0 | 189 | 0.6686 | 0.6125 | 0.6125 | 1.0 | 0.7597 |
| 0.6942 | 4.0 | 252 | 0.6804 | 0.6125 | 0.6125 | 1.0 | 0.7597 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sunyanming/fr-en-mixdata-model
|
sunyanming
| 2025-06-25T04:58:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T04:53:33Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** sunyanming
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
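A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` chat API; the prompt is illustrative, since the card does not document a prompt format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sunyanming/fr-en-mixdata-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative French-to-English request.
messages = [{"role": "user", "content": "Translate to English: Bonjour tout le monde."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```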
|
FelixFu520/sd-class-butterflies-64
|
FelixFu520
| 2025-06-25T04:56:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-06-25T04:55:35Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class ๐งจ](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute ๐ฆ.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('FelixFu520/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF
|
asa123ss
| 2025-06-25T04:55:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"base_model:quantized:huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-25T04:55:05Z |
---
base_model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
library_name: transformers
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2`](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo asa123ss/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2-Q4_K_M-GGUF --hf-file deepseek-r1-distill-qwen-14b-abliterated-v2-q4_k_m.gguf -c 2048
```
|
nntoan209/sqlcoder-7b-2-70e52c42-9159-419d-80a8-d72717ba0d36
|
nntoan209
| 2025-06-25T04:54:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:defog/sqlcoder-7b-2",
"base_model:finetune:defog/sqlcoder-7b-2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-24T19:13:32Z |
---
base_model: defog/sqlcoder-7b-2
library_name: transformers
model_name: sqlcoder-7b-2-70e52c42-9159-419d-80a8-d72717ba0d36
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for sqlcoder-7b-2-70e52c42-9159-419d-80a8-d72717ba0d36
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nntoan209/sqlcoder-7b-2-70e52c42-9159-419d-80a8-d72717ba0d36", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zhiqing/Qwen3-Reranker-4B-seq-cls-ONNX
|
zhiqing
| 2025-06-25T04:48:41Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"qwen3",
"text-classification",
"transformers",
"text-ranking",
"base_model:tomaarsen/Qwen3-Reranker-4B-seq-cls",
"base_model:quantized:tomaarsen/Qwen3-Reranker-4B-seq-cls",
"license:apache-2.0",
"region:us"
] |
text-ranking
| 2025-06-25T02:23:08Z |
---
license: apache-2.0
base_model:
- tomaarsen/Qwen3-Reranker-4B-seq-cls
tags:
- transformers
- sentence-transformers
pipeline_tag: text-ranking
---
# Qwen3-Reranker-4B-Seq-Cls
<p align="center">
    <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
> [!NOTE]
> This is a copy of the [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) model, part of the [Qwen3 Reranker series](https://huggingface.co/collections/Qwen/qwen3-reranker-6841b22d0192d7ade9cdefea), modified as a sequence classification model instead. See [Updated Usage](#updated-usage) for details on how to use it, or [Original Usage](#original-usage) for the original usage.
>
> See [this discussion](https://huggingface.co/Qwen/Qwen3-Reranker-4B/discussions/3) for details on the conversion approach.
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks No.1 in the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Reranker-4B** has the following features:
- Model Type: Text Reranking
- Supported Languages: 100+ Languages
- Number of Parameters: 4B
- Context Length: 32k
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
Transformers 4.51.0 or newer is required; with earlier versions you may encounter the following error:
```
KeyError: 'qwen3'
```
```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"zhiqing/Qwen3-Reranker-4B-seq-cls-ONNX",
padding_side="left",
trust_remote_code=True,
)
PREFIX = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
SUFFIX = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
DEFAULT_INS = "Given a web search query, retrieve relevant passages that answer the query"
def format_instruction(instruction, query, doc):
instruction = instruction or DEFAULT_INS
return f"{PREFIX}<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}{SUFFIX}"
queries = [
"Which planet is known as the Red Planet?",
]
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
"Mars, known for its reddish appearance, is often referred to as the Red Planet.",
"Jupiter, the largest planet in our solar system, has a prominent red spot.",
"Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
if len(queries) != len(documents):
if len(queries) == 1:
queries = queries * len(documents)
elif len(documents) == 1:
documents = documents * len(queries)
else:
raise ValueError("Length mismatch: either provide equal-length lists or one of them must have length 1.")
pairs = [format_instruction(DEFAULT_INS, q, d) for q, d in zip(queries, documents)]
enc = tokenizer(
pairs,
padding=True,
truncation=True,
max_length=8192,
return_tensors="np",
)
inputs = {
"input_ids": enc["input_ids"].astype(np.int64),
"attention_mask": enc["attention_mask"].astype(np.int64),
}
sess = ort.InferenceSession(
"Qwen3-Reranker-4B-seq-cls-ONNX/model.onnx",
providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
logits = sess.run(None, inputs)[0].squeeze(-1)
scores = 1 / (1 + np.exp(-logits))
preds = (scores > 0.5).tolist()
print("logits :", logits.tolist())
print("scores :", scores.tolist())
print("yes/no :", preds)
```
๐ **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
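For example, a task-specific instruction can be swapped in for the default one. This minimal sketch reuses `format_instruction`, `tokenizer`, `np`, and `sess` from the snippet above; the instruction and texts are illustrative:
```python
# Custom instruction for a hypothetical legal-retrieval task.
custom_ins = "Given a legal question, retrieve statutes relevant to the question"
pair = format_instruction(
    custom_ins,
    "What is the statute of limitations for breach of contract?",
    "Actions on written contracts must be brought within six years.",
)
enc = tokenizer([pair], padding=True, truncation=True, max_length=8192, return_tensors="np")
logits = sess.run(None, {
    "input_ids": enc["input_ids"].astype(np.int64),
    "attention_mask": enc["attention_mask"].astype(np.int64),
})[0].squeeze(-1)
print(1 / (1 + np.exp(-logits)))  # relevance score in (0, 1)
```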
## Evaluation
| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|--------|---------|---------|---------|--------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |
> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB and MTEB (Code), which are MTEB-R, CMTEB-R, MMTEB-R and MTEB-Code.
> - All scores are our runs based on the top-100 candidates retrieved by dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3-embedding,
title = {Qwen3-Embedding},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {May},
year = {2025}
}
```
|
mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
|
mcryptoone
| 2025-06-25T04:41:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am screeching finicky kiwi",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-04T22:18:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am screeching finicky kiwi
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mcryptoone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-screeching_finicky_kiwi", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Nammy8/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_foraging_camel
|
Nammy8
| 2025-06-25T04:39:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am robust foraging camel",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-24T17:46:15Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_foraging_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am robust foraging camel
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_foraging_camel
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nammy8/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-robust_foraging_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
EleutherAI/SmolLM2-1.7B-magpie-ultra-v0.1-math-query
|
EleutherAI
| 2025-06-25T04:39:19Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T07:46:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
videos-jaipur-22-godam-hotel-viral-video/FULL.VIDEO.jaipur.22.godam.hotel.Viral.Video.Tutorial.Official
|
videos-jaipur-22-godam-hotel-viral-video
| 2025-06-25T04:38:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T04:38:36Z |
[](https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg)
|
mervp/SQLNova
|
mervp
| 2025-06-25T04:37:04Z | 0 | 0 | null |
[
"safetensors",
"text-generation",
"sql",
"lora",
"unsloth",
"Deepseek",
"conversational",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-18T06:44:56Z |
---
license: mit
base_model: Deepseek-R1
tags:
- text-generation
- sql
- lora
- unsloth
- Deepseek
---
# SQLNova - LoRA Fine-Tuned Deepseek 8B for Text-to-SQL Generation
**SQLNova** is a lightweight LoRA adapter fine-tuned with Unsloth on top of DeepSeek R1 8B Distilled Llama. It is designed to convert natural language instructions into valid SQL queries with minimal compute overhead, making it ideal for integration into data-driven applications or chat interfaces.
The model was trained on over **100,000 natural language-to-SQL pairs** spanning diverse domains, including Education, Technical, Healthcare, and more.
---
## Model Dependencies
- **Python version**: `3.10`
- **Libraries**: `unsloth` (install with `pip install unsloth`)
## Model Highlights
- **Base model**: `Deepseek R1 8B Distilled Llama`
- **Tokenizer**: Compatible with `Deepseek R1 8B Distilled Llama`
- **Fine tuned for**: Text to SQL Converter
- **Accuracy**: > 85%
- **Language**: English Natural Language Sentences finetuned
- **Format**: `safetensors`
### General Information
- **Model type:** Text Generation
- **Language:** English
- **License:** MIT
- **Base model:** DeepSeek R1 distilled on Llama3 8B
### Model Repository
- **Hugging Face Model Card:** [https://huggingface.co/mervp/SQLNova](https://huggingface.co/mervp/SQLNova)
---
## ๐ก Intended Uses
### Applications
- Generating SQL queries from natural language prompts
- Powering AI assistants for databases
- Enhancing SQL query builders or no-code data tools
- Automating analytics workflows
---
## Limitations
While **SQLNova** performs well in many real-world scenarios, it is a reasoning model and has some limitations:
- In rare cases it may produce **invalid SQL**, especially for unusual or malformed inputs.
- It assumes a **generic SQL dialect**, resembling MySQL/PostgreSQL syntax.
### Recommendation for Use of Model
- Always **validate generated SQL** before executing in production (see the sketch after this list).
- Include **schema context** in prompts to improve accuracy.
- Use with **human-in-the-loop** review for critical applications.
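A minimal validation sketch: it parses and plans the generated query against a throwaway in-memory SQLite copy of the schema, so nothing runs against real data. This assumes SQLite's dialect is close enough to catch obvious syntax and unknown-column errors; `sql_query` and `schema` refer to the usage example further down:
```python
import sqlite3

def is_valid_sql(query: str, schema: str) -> bool:
    """Return True if the query parses and plans against the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)        # build throwaway example tables
        conn.execute(f"EXPLAIN {query}")  # plan only; raises on bad SQL
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

# Example: is_valid_sql(sql_query, schema)
```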
Thanks for visiting and downloading this model!
If this model helped you, please consider leaving a like. Your support helps this model reach more developers and encourages further improvements.
---
## How to Use the Model
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="mervp/SQLNova",
max_seq_length=2048,
dtype=None,
)
prompt = """You are a text to SQL query translator.
Users will ask you questions in English,
and you will generate a SQL query based on their question.
The SQL has to be simple; the schema context has been provided to you.
### User Question:
{}
### Sql Context:
{}
### Sql Query:
{}
"""
question = "List the names of customers who have an account balance greater than 6000."
schema = """
CREATE TABLE socially_responsible_lending (
customer_id INT,
name VARCHAR(50),
account_balance DECIMAL(10, 2)
);
INSERT INTO socially_responsible_lending VALUES
(1, 'james Chad', 5000),
(2, 'Jane Rajesh', 7000),
(3, 'Alia Kapoor', 6000),
(4, 'Fatima Patil', 8000);
"""
inputs = tokenizer(
[prompt.format(question, schema, "")],
return_tensors="pt",
padding=True,
truncation=True
).to("cuda")
output = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.2,
top_p=0.9,
top_k=50,
do_sample=True
)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
if "### Sql Query:" in decoded_output:
sql_query = decoded_output.split("### Sql Query:")[-1].strip()
else:
sql_query = decoded_output.strip()
print(sql_query)
```
|
CrashOnline/Nayan-OCR
|
CrashOnline
| 2025-06-25T04:36:13Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T03:38:33Z |
---
license: apache-2.0
---
|
yuan19/my-gpt2-taobao
|
yuan19
| 2025-06-25T04:33:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:10:42Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: my-gpt2-taobao
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-gpt2-taobao
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8488
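Given the high evaluation loss, treat generations as experimental. A minimal sketch, assuming the standard `transformers` pipeline API (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="yuan19/my-gpt2-taobao")
# Illustrative prompt ("this dress"); the card does not document the training domain.
print(generator("这款连衣裙", max_new_tokens=50)[0]["generated_text"])
```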
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 12 | 7.9214 |
| No log | 2.0 | 24 | 7.1178 |
| No log | 3.0 | 36 | 6.8488 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Jsjssjkssksk/Jszmzk
|
Jsjssjkssksk
| 2025-06-25T04:31:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T04:31:50Z |
---
license: apache-2.0
---
|
18-Video-juliana-marins-bbc-tv/Trending.Video.juliana.marins.bbc.viral.videos
|
18-Video-juliana-marins-bbc-tv
| 2025-06-25T04:30:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T04:29:42Z |
[](https://viralflix.xyz/leaked/?Jju)
|
NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v9_mix_resample_20epochs
|
NICOPOI-9
| 2025-06-25T04:28:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b5",
"base_model:finetune:nvidia/mit-b5",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2025-06-24T16:03:37Z |
---
library_name: transformers
license: other
base_model: nvidia/mit-b5
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-finetuned-morphpadver1-hgo-coord-v9_mix_resample_20epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b5-finetuned-morphpadver1-hgo-coord-v9_mix_resample_20epochs
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the NICOPOI-9/morphpad_coord_hgo_512_4class_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9212
- Mean Iou: 0.6264
- Mean Accuracy: 0.7657
- Overall Accuracy: 0.7693
- Accuracy 0-0: 0.7303
- Accuracy 0-90: 0.8137
- Accuracy 90-0: 0.7916
- Accuracy 90-90: 0.7271
- Iou 0-0: 0.6366
- Iou 0-90: 0.6229
- Iou 90-0: 0.6132
- Iou 90-90: 0.6329
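A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` SegFormer classes; the image URL is illustrative:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "NICOPOI-9/segformer-b5-finetuned-morphpadver1-hgo-coord-v9_mix_resample_20epochs"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/sample.png", stream=True).raw)  # illustrative URL
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel ids for the four orientation classes
print(pred.shape, pred.unique())
```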
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0-0 | Accuracy 0-90 | Accuracy 90-0 | Accuracy 90-90 | Iou 0-0 | Iou 0-90 | Iou 90-0 | Iou 90-90 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:-------------:|:-------------:|:--------------:|:-------:|:--------:|:--------:|:---------:|
| 1.3571 | 1.3638 | 4000 | 1.3517 | 0.1822 | 0.3201 | 0.3311 | 0.2216 | 0.2676 | 0.6249 | 0.1665 | 0.1575 | 0.1831 | 0.2565 | 0.1315 |
| 0.8199 | 2.7276 | 8000 | 1.2275 | 0.2731 | 0.4274 | 0.4361 | 0.3563 | 0.5122 | 0.5256 | 0.3155 | 0.2523 | 0.3082 | 0.2937 | 0.2381 |
| 0.8824 | 4.0914 | 12000 | 1.1421 | 0.3520 | 0.5198 | 0.5231 | 0.4839 | 0.5162 | 0.5954 | 0.4839 | 0.3342 | 0.3733 | 0.3710 | 0.3297 |
| 0.5435 | 5.4552 | 16000 | 0.9993 | 0.4242 | 0.5921 | 0.5979 | 0.5242 | 0.6641 | 0.6381 | 0.5420 | 0.4136 | 0.4409 | 0.4348 | 0.4076 |
| 0.8088 | 6.8190 | 20000 | 1.0559 | 0.4473 | 0.6166 | 0.6183 | 0.5671 | 0.5950 | 0.6749 | 0.6296 | 0.4524 | 0.4525 | 0.4513 | 0.4331 |
| 0.3228 | 8.1827 | 24000 | 0.9718 | 0.4925 | 0.6572 | 0.6604 | 0.5965 | 0.6694 | 0.7118 | 0.6511 | 0.4794 | 0.5029 | 0.4892 | 0.4985 |
| 0.8418 | 9.5465 | 28000 | 0.9748 | 0.5147 | 0.6735 | 0.6808 | 0.6234 | 0.7941 | 0.6989 | 0.5776 | 0.5228 | 0.5218 | 0.5217 | 0.4925 |
| 0.4066 | 10.9103 | 32000 | 0.9678 | 0.5360 | 0.6956 | 0.6985 | 0.6499 | 0.7135 | 0.7388 | 0.6803 | 0.5274 | 0.5461 | 0.5374 | 0.5330 |
| 0.3456 | 12.2741 | 36000 | 0.8965 | 0.5680 | 0.7221 | 0.7245 | 0.6491 | 0.7611 | 0.7252 | 0.7532 | 0.5661 | 0.5709 | 0.5625 | 0.5725 |
| 0.3544 | 13.6379 | 40000 | 0.8759 | 0.5800 | 0.7301 | 0.7343 | 0.7005 | 0.8018 | 0.7436 | 0.6744 | 0.5869 | 0.5831 | 0.5780 | 0.5721 |
| 0.3027 | 15.0017 | 44000 | 0.8860 | 0.5966 | 0.7437 | 0.7471 | 0.6909 | 0.7757 | 0.7824 | 0.7257 | 0.6008 | 0.5977 | 0.5931 | 0.5947 |
| 0.1839 | 16.3655 | 48000 | 0.9557 | 0.6063 | 0.7507 | 0.7537 | 0.7106 | 0.7862 | 0.7744 | 0.7317 | 0.6161 | 0.6070 | 0.5849 | 0.6170 |
| 0.1924 | 17.7293 | 52000 | 0.8912 | 0.6285 | 0.7682 | 0.7711 | 0.7340 | 0.8063 | 0.7894 | 0.7432 | 0.6382 | 0.6315 | 0.6125 | 0.6315 |
| 0.2531 | 19.0931 | 56000 | 0.9212 | 0.6264 | 0.7657 | 0.7693 | 0.7303 | 0.8137 | 0.7916 | 0.7271 | 0.6366 | 0.6229 | 0.6132 | 0.6329 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
LLM4Code/CodeARC_annotated_llama3.1
|
LLM4Code
| 2025-06-25T04:26:43Z | 32 | 1 | null |
[
"safetensors",
"llama",
"reasoning",
"agent",
"program",
"code",
"arxiv:2503.23145",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-29T00:29:03Z |
---
license: apache-2.0
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- reasoning
- agent
- program
- code
---
**CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis**
Paper: https://arxiv.org/pdf/2503.23145
Code: https://github.com/Anjiang-Wei/CodeARC
Website: https://anjiang-wei.github.io/CodeARC-Website/
Dataset: https://huggingface.co/datasets/anjiangwei/CodeARC-Problems
10 Input-Output examples for each problem: https://huggingface.co/datasets/anjiangwei/CodeARC-Invocations
Fine-tuned models:
https://huggingface.co/LLM4Code/CodeARC_annotated_llama3.1
https://huggingface.co/LLM4Code/CodeARC_anonymous_llama3.1
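A minimal loading sketch for the annotated checkpoint, assuming it follows the standard Llama-3.1 chat format via `transformers`; the inductive-synthesis prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM4Code/CodeARC_annotated_llama3.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Infer a function from input-output examples (the core CodeARC task).
messages = [{"role": "user", "content": "Given f(1)=2, f(2)=4, f(3)=6, write f in Python."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```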
```
@article{wei2025codearc,
title={CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis},
author={Wei, Anjiang and Suresh, Tarun and Cao, Jiannan and Kannan, Naveen and Wu, Yuheng and Yan, Kai and Teixeira, Thiago SFX and Wang, Ke and Aiken, Alex},
journal={arXiv preprint arXiv:2503.23145},
year={2025}
}
```
|
sam34738/new-muril-efficientnet-binary
|
sam34738
| 2025-06-25T04:26:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"binary_multimodal",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T04:25:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Thermostatic/neuraltranslate-27b-mt-nah-es-v1.1
|
Thermostatic
| 2025-06-25T04:19:36Z | 62 | 0 | null |
[
"safetensors",
"gemma3",
"Translation",
"Gemma 3",
"Spanish",
"Nahuatl",
"Machine translation",
"es",
"nah",
"dataset:Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits",
"license:mit",
"region:us"
] | null | 2025-06-22T16:13:42Z |
---
license: mit
datasets:
- Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits
language:
- es
- nah
tags:
- Translation
- Gemma 3
- Spanish
- Nahuatl
- Machine translation
---

# Model Card for NeuralTranslate
<!-- Provide a quick summary of what the model is/does. -->
THIS MODEL USES GEMMA 3 TEMPLATE.
This is the first official release of NeuralTranslate 27b Machine Translation: Spanish to Nahuatl. The base model is Gemma 3 27b Instruct, trained on the Axolotl Spanish-Nahuatl dataset for 9 epochs.
You can donate towards this project at my ko-fi! https://ko-fi.com/irvingernesto
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Irving Ernesto
- **Funded by:** Irving Ernesto
- **Model type:** Large Language Model
- **Language(s) (NLP):** Spanish & Nรกhuatl
- **License:** MIT
- **Finetuned from model [optional]:** Gemma 3 27b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Sekinal/neuraltranslate-nahuatl
- **Demo:** https://huggingface.co/spaces/Thermostatic/neuraltranslate-27b-mt-nah-es
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Translating between any other pair of languages. E.g., trying to use the model to translate Náhuatl to English won't work. Even using the model to translate from Spanish to Náhuatl is not reliable.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Use the recommended settings for the Gemma 3 model for inference: `temperature = 1.0, top_p = 0.95, top_k = 64`
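A minimal inference sketch with those settings, assuming the checkpoint loads through the standard `transformers` text-generation pipeline; the Nahuatl prompt is illustrative and the prompt format is an assumption:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Thermostatic/neuraltranslate-27b-mt-nah-es-v1.1", device_map="auto")

# Illustrative Nahuatl input; the card does not document a prompt format.
messages = [{"role": "user", "content": "Niltze, quen tinemi?"}]
out = pipe(messages, max_new_tokens=128, do_sample=True,
           temperature=1.0, top_p=0.95, top_k=64)  # recommended Gemma 3 settings
print(out[0]["generated_text"][-1]["content"])
```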
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
|
Thermostatic/neuraltranslate-27b-mt-nah-es-v1.2
|
Thermostatic
| 2025-06-25T04:19:22Z | 58 | 1 | null |
[
"safetensors",
"gemma3",
"Translation",
"Gemma 3",
"Spanish",
"Nahuatl",
"Machine translation",
"es",
"nah",
"dataset:Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits",
"license:mit",
"region:us"
] | null | 2025-06-22T16:53:12Z |
---
license: mit
datasets:
- Thermostatic/Axolotl-Spanish-Nahuatl-ShareGPT-Filtered-Splits
language:
- es
- nah
tags:
- Translation
- Gemma 3
- Spanish
- Nahuatl
- Machine translation
---

# Model Card for NeuralTranslate
<!-- Provide a quick summary of what the model is/does. -->
THIS MODEL USES GEMMA 3 TEMPLATE.
This is the first official release of NeuralTranslate 27b Machine Translation: Spanish to Nahuatl. The base model is Gemma 3 27b Instruct, trained on the Axolotl Spanish-Nahuatl dataset for 10 epochs.
You can donate towards this project at my ko-fi! https://ko-fi.com/irvingernesto
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Irving Ernesto
- **Funded by:** Irving Ernesto
- **Model type:** Large Language Model
- **Language(s) (NLP):** Spanish & Nรกhuatl
- **License:** MIT
- **Finetuned from model [optional]:** Gemma 3 27b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Sekinal/neuraltranslate-nahuatl
- **Demo:** https://huggingface.co/spaces/Thermostatic/neuraltranslate-27b-mt-nah-es
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Translating between any other pair of languages. E.g., trying to use the model to translate Náhuatl to English won't work. Even using the model to translate from Spanish to Náhuatl is not reliable.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Use the recommended settings for the Gemma 3 model for inference: `temperature = 1.0, top_p = 0.95, top_k = 64`
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
|
misiker/trainer_output
|
misiker
| 2025-06-25T04:18:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-24T01:47:03Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- wer
model-index:
- name: misiker/trainer_output
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: minds14
config: en-US
split: train[:500]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.9748427672955975
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# misiker/trainer_output
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 18.8318
- Wer: 0.9748
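Given the high WER above, treat transcriptions as experimental. A minimal sketch, assuming the standard `transformers` ASR pipeline; the audio path is illustrative:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="misiker/trainer_output")
# Path is illustrative; pass any mono 16 kHz audio file.
print(asr("sample.wav")["text"])
```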
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.8 | 20 | 37.9601 | 1.6562 |
| 41.3656 | 1.6 | 40 | 20.2900 | 0.9755 |
| 18.9017 | 2.4 | 60 | 10.7917 | 0.9734 |
| 18.9017 | 3.2 | 80 | 11.6330 | 0.9734 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
|
White0912/clip-trend-encoder
|
White0912
| 2025-06-25T04:14:26Z | 0 | 0 | null |
[
"pytorch",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:53:48Z |
---
license: apache-2.0
---
|
SayBitekhan/7-gemma3-27b-uz-lora
|
SayBitekhan
| 2025-06-25T04:14:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-27b-it",
"base_model:adapter:unsloth/gemma-3-27b-it",
"region:us"
] | null | 2025-06-25T04:07:36Z |
---
base_model: unsloth/gemma-3-27b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
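In the absence of author-provided code, a minimal loading sketch (the model class choice for the Gemma 3 base is our assumption, inferred from the metadata above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base checkpoint, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-27b-it", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "SayBitekhan/7-gemma3-27b-uz-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-27b-it")
```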
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
New-videos-comatozze-viral-video-Clips/FULL.VIDEO.comatozze.Viral.Video.Tutorial.Official
|
New-videos-comatozze-viral-video-Clips
| 2025-06-25T04:11:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T04:10:59Z |
|
zythammers/test
|
zythammers
| 2025-06-25T04:07:48Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:57:57Z |
---
license: apache-2.0
---
|
johngreendr1/53087c24-3d16-4267-8d92-a6385630345a
|
johngreendr1
| 2025-06-25T04:03:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"region:us"
] | null | 2025-06-25T04:03:26Z |
---
base_model: unsloth/llama-3-8b-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
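Until the author fills this in, one hedged possibility is the standard PEFT auto-class, which reads `adapter_config.json` and pulls in the base model automatically:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads unsloth/llama-3-8b-Instruct plus this adapter in a single call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "johngreendr1/53087c24-3d16-4267-8d92-a6385630345a",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct")
```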
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
zou-lab/BioMed-R1-32B
|
zou-lab
| 2025-06-25T03:59:39Z | 0 | 1 | null |
[
"safetensors",
"medical",
"text-generation",
"en",
"arxiv:2505.11462",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-06-25T03:10:00Z |
---
license: llama3.1
language:
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
pipeline_tag: text-generation
tags:
- medical
---
<div align="center">
<h1>
Disentangling Reasoning and Knowledge in Medical Large Language Models
</h1>
</div>
## Introduction
<div align="center">
<img src="overall_workflow.jpg" width="90%" alt="overall_workflow" />
</div>
Medical reasoning in large language models aims to replicate clinicians' cognitive processes when interpreting patient data and making diagnostic decisions. However, widely used benchmarks—such as MedQA-USMLE, MedMCQA, and PubMedQA—mix questions that require multi-step reasoning with those answerable through factual recall, complicating reasoning evaluation. To address this, we develop a PubMedBERT-based classifier (81% agreement with expert annotations) to disentangle reasoning-heavy from knowledge-heavy questions across 11 biomedical QA benchmarks, revealing that only 32.8% require complex reasoning. Using this stratification, we evaluate biomedical models (HuatuoGPT-o1, MedReason, m1) and general-domain models (DeepSeek-R1, o4-mini, Qwen3), and consistently observe lower performance on reasoning versus knowledge (e.g., HuatuoGPT-o1: 56.9% vs. 44.8%). To assess robustness, we conduct adversarial evaluations where models are prefilled with incorrect answers before being asked to reconsider. Biomedical models show substantial degradation in this setting (e.g., MedReason drops from 50.4% to 24.4%), while RL-trained and larger general-domain models are more resilient. Performance declines more on reasoning-heavy questions, highlighting the brittleness of current medical reasoning capabilities. Based on these insights, we train BioMed-R1 models using supervised fine-tuning and reinforcement learning on reasoning-heavy and adversarial examples, encouraging self-correction and backtracking. Our models achieve the strongest overall and adversarial performance among similarly sized biomedical LLMs, yet ample room for improvement remains. Incorporating additional reasoning-rich data sources—such as clinical case reports—and developing training strategies that promote reasoning under uncertainty may further enhance robustness and diagnostic reliability.
<div align="center">
<img src="reasoning_vs_knowledge.png" width="90%" alt="reason_vs_knowledge" />
</div>
BioMed-R1 can be used just like `Qwen/Qwen2.5-32B-Instruct`. You can deploy it with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "zou-lab/BioMed-R1-32B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zou-lab/BioMed-R1-32B")

input_text = "Does vagus nerve contribute to the development of steatohepatitis and obesity in phosphatidylethanolamine N-methyltransferase deficient mice?"
messages = [{"role": "user", "content": input_text}]

# Build the chat-formatted prompt, then tokenize it for generation
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
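For higher-throughput serving, a minimal vLLM sketch (the sampling settings here are illustrative assumptions, not recommendations from the authors):
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zou-lab/BioMed-R1-32B")
llm = LLM(model="zou-lab/BioMed-R1-32B")

messages = [{"role": "user", "content": "What is the mechanism of action of metformin?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Batched offline generation; temperature/max_tokens are placeholder choices.
outputs = llm.generate([prompt], SamplingParams(temperature=0.6, max_tokens=2048))
print(outputs[0].outputs[0].text)
```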
## ๐๐ผ Acknowledgement
We gratefully acknowledge the contributions of [HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1), [MedReason](https://github.com/UCSC-VLAA/MedReason), and [M1](https://github.com/UCSC-VLAA/m1).
We also thank the developers of the outstanding tools [Curator](https://github.com/bespokelabsai/curator), [TRL](https://github.com/huggingface/trl), [vLLM](https://github.com/vllm-project/vllm), and [SGLang](https://github.com/sgl-project/sglang), which made this work possible.
## ๐ Citation
```
@article{thapa2025disentangling,
title={Disentangling Reasoning and Knowledge in Medical Large Language Models},
author={Thapa, Rahul and Wu, Qingyang and Wu, Kevin and Zhang, Harrison and Zhang, Angela and Wu, Eric and Ye, Haotian and Bedi, Suhana and Aresh, Nevin and Boen, Joseph and Reddy, Shriya and Athiwaratkun, Ben and Song, Shuaiwen Leon and Zou, James},
journal={arXiv preprint arXiv:2505.11462},
year={2025},
url={https://arxiv.org/abs/2505.11462}
}
```
|
New-videos-Katrina-Lim-viral-video/FULL.VIDEO.Katrina.Lim.Viral.Video.Tutorial.Official
|
New-videos-Katrina-Lim-viral-video
| 2025-06-25T03:57:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T03:57:32Z |
|
zou-lab/BioMed-R1-8B
|
zou-lab
| 2025-06-25T03:57:45Z | 0 | 0 | null |
[
"safetensors",
"medical",
"text-generation",
"en",
"arxiv:2505.11462",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-06-25T03:47:53Z |
---
license: llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- medical
---
<div align="center">
<h1>
Disentangling Reasoning and Knowledge in Medical Large Language Models
</h1>
</div>
## Introduction
<div align="center">
<img src="overall_workflow.jpg" width="90%" alt="overall_workflow" />
</div>
Medical reasoning in large language models aims to replicate clinicians' cognitive processes when interpreting patient data and making diagnostic decisions. However, widely used benchmarks—such as MedQA-USMLE, MedMCQA, and PubMedQA—mix questions that require multi-step reasoning with those answerable through factual recall, complicating reasoning evaluation. To address this, we develop a PubMedBERT-based classifier (81% agreement with expert annotations) to disentangle reasoning-heavy from knowledge-heavy questions across 11 biomedical QA benchmarks, revealing that only 32.8% require complex reasoning. Using this stratification, we evaluate biomedical models (HuatuoGPT-o1, MedReason, m1) and general-domain models (DeepSeek-R1, o4-mini, Qwen3), and consistently observe lower performance on reasoning versus knowledge (e.g., HuatuoGPT-o1: 56.9% vs. 44.8%). To assess robustness, we conduct adversarial evaluations where models are prefilled with incorrect answers before being asked to reconsider. Biomedical models show substantial degradation in this setting (e.g., MedReason drops from 50.4% to 24.4%), while RL-trained and larger general-domain models are more resilient. Performance declines more on reasoning-heavy questions, highlighting the brittleness of current medical reasoning capabilities. Based on these insights, we train BioMed-R1 models using supervised fine-tuning and reinforcement learning on reasoning-heavy and adversarial examples, encouraging self-correction and backtracking. Our models achieve the strongest overall and adversarial performance among similarly sized biomedical LLMs, yet ample room for improvement remains. Incorporating additional reasoning-rich data sources—such as clinical case reports—and developing training strategies that promote reasoning under uncertainty may further enhance robustness and diagnostic reliability.
<div align="center">
<img src="reasoning_vs_knowledge.png" width="90%" alt="reason_vs_knowledge" />
</div>
BioMed-R1 can be used just like `Llama-3.1-8B-Instruct`. You can deploy it with tools like [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "zou-lab/BioMed-R1-8B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zou-lab/BioMed-R1-8B")

input_text = "Does vagus nerve contribute to the development of steatohepatitis and obesity in phosphatidylethanolamine N-methyltransferase deficient mice?"
messages = [{"role": "user", "content": input_text}]

# Build the chat-formatted prompt, then tokenize it for generation
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
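The model can also sit behind an OpenAI-compatible endpoint (e.g. started with `vllm serve zou-lab/BioMed-R1-8B`); a minimal client sketch under that assumption:
```python
from openai import OpenAI

# Assumes a local vLLM (or SGLang) OpenAI-compatible server on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="zou-lab/BioMed-R1-8B",
    messages=[{"role": "user", "content": "What are the first-line treatments for hypertension?"}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
```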
## ๐๐ผ Acknowledgement
We gratefully acknowledge the contributions of [HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1), [MedReason](https://github.com/UCSC-VLAA/MedReason), and [M1](https://github.com/UCSC-VLAA/m1).
We also thank the developers of the outstanding tools [Curator](https://github.com/bespokelabsai/curator), [TRL](https://github.com/huggingface/trl), [vLLM](https://github.com/vllm-project/vllm), and [SGLang](https://github.com/sgl-project/sglang), which made this work possible.
## ๐ Citation
```
@article{thapa2025disentangling,
title={Disentangling Reasoning and Knowledge in Medical Large Language Models},
author={Thapa, Rahul and Wu, Qingyang and Wu, Kevin and Zhang, Harrison and Zhang, Angela and Wu, Eric and Ye, Haotian and Bedi, Suhana and Aresh, Nevin and Boen, Joseph and Reddy, Shriya and Athiwaratkun, Ben and Song, Shuaiwen Leon and Zou, James},
journal={arXiv preprint arXiv:2505.11462},
year={2025},
url={https://arxiv.org/abs/2505.11462}
}
```
|
crosstar/mistral_5_CoT_few_shot_12step
|
crosstar
| 2025-06-25T03:56:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T03:54:22Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
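No snippet was provided; a minimal loading sketch inferred from the repo tags (`mistral`, 4-bit `bitsandbytes`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The checkpoint was saved with a 4-bit bitsandbytes quantization config,
# which from_pretrained picks up automatically.
model = AutoModelForCausalLM.from_pretrained(
    "crosstar/mistral_5_CoT_few_shot_12step", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("crosstar/mistral_5_CoT_few_shot_12step")
```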
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
goaiguru/medical-qa-phi3-mini-mac
|
goaiguru
| 2025-06-25T03:54:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T03:54:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RedbeardNZ/BEN2
|
RedbeardNZ
| 2025-06-25T03:54:01Z | 0 | 0 |
ben2
|
[
"ben2",
"onnx",
"safetensors",
"BEN2",
"background-remove",
"mask-generation",
"Dichotomous image segmentation",
"background remove",
"foreground",
"background",
"remove background",
"pytorch",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"background removal",
"background-removal",
"image-segmentation",
"arxiv:2501.06230",
"license:mit",
"region:us"
] |
image-segmentation
| 2025-06-25T03:54:01Z |
---
license: mit
pipeline_tag: image-segmentation
library_name: ben2
tags:
- BEN2
- background-remove
- mask-generation
- Dichotomous image segmentation
- background remove
- foreground
- background
- remove background
- pytorch
- model_hub_mixin
- pytorch_model_hub_mixin
- background removal
- background-removal
---
# BEN2: Background Erase Network
[](https://arxiv.org/abs/2501.06230)
[](https://github.com/PramaLLC/BEN2/)
[](https://backgrounderase.net)
## Overview
BEN2 (Background Erase Network) introduces a novel approach to foreground segmentation through its innovative Confidence Guided Matting (CGM) pipeline. The architecture employs a refiner network that targets and processes pixels where the base model exhibits lower confidence levels, resulting in more precise and reliable matting results. This model is built on BEN:
[](https://paperswithcode.com/sota/dichotomous-image-segmentation-on-dis-vd?p=ben-using-confidence-guided-matting-for)
## BEN2 access
BEN2 was trained on DIS5k and our proprietary 22K-image segmentation dataset. Our enhanced model delivers superior performance in hair matting, 4K processing, object segmentation, and edge refinement. Our base model is open source. To try the full model through our free web demo or integrate BEN2 into your project with our API:
- ๐ [backgrounderase.net](https://backgrounderase.net)
## Contact us
- For access to our commercial model email us at sales@prama.llc
- Our website: https://prama.llc/
- Follow us on X: https://x.com/PramaResearch/
## Installation
```
pip install -e "git+https://github.com/PramaLLC/BEN2.git#egg=ben2"
```
## Quick start code
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
file = "./image.png" # input image
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
image = Image.open(file)
foreground = model.inference(image, refine_foreground=False)  # Refine foreground is an extra postprocessing step that increases inference time but can improve matting edges. The default value is False.
foreground.save("./foreground.png")
```
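A common next step is compositing the cutout onto a new background; a small PIL sketch (the RGBA assumption and file names are ours, not part of the BEN2 API):
```python
from PIL import Image

# Paste the extracted foreground onto a new background, using its alpha
# channel as the mask. Assumes the saved foreground is an RGBA image.
foreground = Image.open("./foreground.png").convert("RGBA")
background = Image.open("./new_background.png").convert("RGBA").resize(foreground.size)
composite = Image.alpha_composite(background, foreground)
composite.convert("RGB").save("./composite.jpg")
```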
## Batch image processing
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
file1 = "./image1.png" # input image1
file2 = "./image2.png" # input image2
image1 = Image.open(file1)
image2 = Image.open(file2)
foregrounds = model.inference([image1, image2]) # We recommend that the batch size not exceed 3 for consumer GPUs as there are minimal inference gains due to our custom batch processing for the MVANet decoding steps.
foregrounds[0].save("./foreground1.png")
foregrounds[1].save("./foreground2.png")
```
# BEN2 video segmentation
[](https://www.youtube.com/watch?v=skEXiIHQcys)
## Video Segmentation
```bash
sudo apt update
sudo apt install ffmpeg
```
```python
from ben2 import BEN_Base
from PIL import Image
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
video_path = "/path_to_your_video.mp4"# input video
model = BEN_Base.from_pretrained("PramaLLC/BEN2")
model.to(device).eval()
model.segment_video(
    video_path=video_path,
    output_path="./",  # Outputs are saved as foreground.webm or foreground.mp4. The default value is "./".
    fps=0,  # If set to 0, CV2 detects the fps of the original video. The default value is 0.
    refine_foreground=False,  # Refine foreground is an extra postprocessing step that increases inference time but can improve matting edges. The default value is False.
    batch=1,  # We recommend that the batch size not exceed 3 for consumer GPUs as there are minimal inference gains. The default value is 1.
    print_frames_processed=True,  # Reports which frame is being processed. The default value is True.
    webm=False,  # If True, outputs a video with an alpha layer (webm); otherwise the output defaults to mp4. The default value is False.
    rgb_value=(0, 255, 0),  # RGB value of the resulting background, used only when webm is False. The default is a green background (0, 255, 0).
)
```
# BEN2 evaluation

RMBG 2.0 did not preserve the DIS 5k validation dataset





|
rmdhirr/suja-lorab-ins-100
|
rmdhirr
| 2025-06-25T03:48:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-25T03:47:12Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
rrr13/uuu_fine_tune_gpt2
|
rrr13
| 2025-06-25T03:42:24Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T03:23:12Z |
---
license: apache-2.0
---
|
tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF
|
tofumagnate
| 2025-06-25T03:42:04Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TheSkullery/L3.3-Unnamed-Exp-8B-V0.1",
"base_model:quantized:TheSkullery/L3.3-Unnamed-Exp-8B-V0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T03:41:33Z |
---
base_model: TheSkullery/L3.3-Unnamed-Exp-8B-V0.1
tags:
- llama-cpp
- gguf-my-repo
---
# tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF
This model was converted to GGUF format from [`TheSkullery/L3.3-Unnamed-Exp-8B-V0.1`](https://huggingface.co/TheSkullery/L3.3-Unnamed-Exp-8B-V0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheSkullery/L3.3-Unnamed-Exp-8B-V0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF --hf-file l3.3-unnamed-exp-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF --hf-file l3.3-unnamed-exp-8b-v0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF --hf-file l3.3-unnamed-exp-8b-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tofumagnate/L3.3-Unnamed-Exp-8B-V0.1-Q8_0-GGUF --hf-file l3.3-unnamed-exp-8b-v0.1-q8_0.gguf -c 2048
```
|
Clip-18-Brazilian-tourist-who-fell-off/Clip.18.Brazilian.tourist.who.fell.off.Indonesian
|
Clip-18-Brazilian-tourist-who-fell-off
| 2025-06-25T03:39:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T03:39:16Z |
|
pennylin09/uuu_fine_tune_gpt2
|
pennylin09
| 2025-06-25T03:38:19Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:37:31Z |
---
license: apache-2.0
---
|
New-videos-Kirti-Patel-viral-video-Clips/FULL.VIDEO.Kirti.Patel.Viral.Video.Tutorial.Official
|
New-videos-Kirti-Patel-viral-video-Clips
| 2025-06-25T03:37:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T03:36:54Z |
|
v1olet/LLM-CLS-Qwen2.5-1.5B-Instruct-Lora-SFT-3-Epoch
|
v1olet
| 2025-06-25T03:36:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T03:33:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
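No snippet was provided; a minimal chat-style sketch (the example prompt is ours, and reading "CLS" in the repo name as a classification-style instruction is an assumption):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="v1olet/LLM-CLS-Qwen2.5-1.5B-Instruct-Lora-SFT-3-Epoch",
    device_map="auto",
)
messages = [{"role": "user", "content": "Classify the sentiment of: 'Great battery life, terrible screen.'"}]
# With chat-style input, the pipeline returns the full message list;
# the last entry is the assistant reply.
print(generator(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"])
```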
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hasancanonder/Llama-3.2-1B-Turkish-ORPO
|
hasancanonder
| 2025-06-25T03:35:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T03:33:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
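No snippet was provided; a minimal sketch assuming the standard Llama 3.2 chat template (the Turkish example prompt is ours):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "hasancanonder/Llama-3.2-1B-Turkish-ORPO", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("hasancanonder/Llama-3.2-1B-Turkish-ORPO")

messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]  # "What is the capital of Turkey?"
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```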
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-videos-Kaur-Preet-viral-video-Clips/FULL.VIDEO.Kaur.Preet.Viral.Video.Tutorial.Official
|
New-videos-Kaur-Preet-viral-video-Clips
| 2025-06-25T03:33:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-25T03:33:28Z |
|
crosstar/mistral_5_CoT_few_shot_8step
|
crosstar
| 2025-06-25T03:32:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T03:30:37Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
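Since the snippet is missing, here is a minimal sketch (untested; assumes the repository loads as a standard 4-bit bitsandbytes ๐ค Transformers checkpoint, as its tags suggest):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crosstar/mistral_5_CoT_few_shot_8step"

# The quantization config stored with the checkpoint is picked up automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short chain-of-thought style completion.
inputs = tokenizer("Solve step by step: what is 12 * 7?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```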
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth
|
cgifbribcgfbi
| 2025-06-25T03:31:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"dataset:oc-nosynth_5000.jsonl",
"base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"base_model:adapter:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned",
"license:llama3.3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-25T00:50:32Z |
---
library_name: peft
license: llama3.3
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
tags:
- axolotl
- generated_from_trainer
datasets:
- oc-nosynth_5000.jsonl
model-index:
- name: Llama-3.3-70B-chem-oc-nosynth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0`
```yaml
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Llama-3.3-70B-chem-oc-nosynth
output_dir: ./outputs/out/Llama-3.3-70B-chem-oc-nosynth
hub_model_id: cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
  - path: oc-nosynth_5000.jsonl
    type: chat_template
    field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 3373
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: false
lora_modules_to_save:
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: true
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sharding_strategy: FULL_SHARD
special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Llama-3.3-70B-chem-oc-nosynth
This model is a fine-tuned version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned) on the oc-nosynth_5000.jsonl dataset.
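A minimal inference sketch (untested; assumes the 4-bit bitsandbytes setup from the axolotl config above and standard PEFT adapter loading):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"
adapter_id = "cgifbribcgfbi/Llama-3.3-70B-chem-oc-nosynth"

# Load the base model in 4-bit, as during QLoRA training, then attach the adapter.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
```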
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 648
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Slill314/medical_energy
|
Slill314
| 2025-06-25T03:26:13Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T03:23:49Z |
---
license: apache-2.0
---
|
Cameron914/uuu_fine_tune_gpt2
|
Cameron914
| 2025-06-25T03:26:10Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:34:49Z |
---
license: apache-2.0
---
|
johngreendr1/72a53c5a-be56-4519-a53c-999041c64c96
|
johngreendr1
| 2025-06-25T03:24:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9",
"region:us"
] | null | 2025-06-25T02:09:27Z |
---
base_model: NousResearch/Nous-Capybara-7B-V1.9
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
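The snippet is missing; as a minimal sketch (untested; assumes standard PEFT adapter loading on the base model listed above):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "johngreendr1/72a53c5a-be56-4519-a53c-999041c64c96"

# AutoPeftModelForCausalLM resolves the base model
# (NousResearch/Nous-Capybara-7B-V1.9) from the adapter's config.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Nous-Capybara-7B-V1.9")
```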
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5
|
mlx-community
| 2025-06-25T03:24:34Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"dataset:tokyotech-llm/lmsys-chat-1m-synth",
"dataset:lmsys/lmsys-chat-1m",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"base_model:finetune:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5",
"license:llama3.3",
"license:gemma",
"region:us"
] |
text-generation
| 2025-06-25T02:59:08Z |
---
language:
- en
- ja
library_name: mlx
pipeline_tag: text-generation
license:
- llama3.3
- gemma
model_type: llama
datasets:
- tokyotech-llm/lmsys-chat-1m-synth
- lmsys/lmsys-chat-1m
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
tags:
- mlx
---
# mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5
This model [mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5](https://huggingface.co/mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5) was
converted to MLX format from [tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the MLX-converted weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Llama-3.1-Swallow-8B-Instruct-v0.5")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
vishakr01/comp4_12
|
vishakr01
| 2025-06-25T03:24:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T03:22:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
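The snippet is missing; a short sketch (untested; assumes the checkpoint behaves as a plain ๐ค Transformers text-generation model):

```python
from transformers import pipeline

# device_map="auto" places the weights on the available accelerator(s).
generator = pipeline("text-generation", model="vishakr01/comp4_12", device_map="auto")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```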
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Doctor-Shotgun/MS3.2-24B-Magnum-Diamond-LoRA
|
Doctor-Shotgun
| 2025-06-25T03:23:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:adapter:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T17:45:00Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
tags:
- axolotl
- generated_from_trainer
---
# MS3.2-24B-Magnum-Diamond-LoRA
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from a text-only conversion of [mistralai/Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), but with pre-tokenization and modifications to the custom loss masking.
The goal was to re-create the model at a smaller, more consumer-friendly size.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
This is a minor version update over [Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-LoRA](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-LoRA) utilizing the new official instruct model from June 2025.
[Merged full model](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond)
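To reproduce the merged release locally, a rough sketch (untested; assumes the text-only base named in the axolotl config below and standard PEFT merging):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_id = "anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only"
adapter_id = "Doctor-Shotgun/MS3.2-24B-Magnum-Diamond-LoRA"

# Attach the rsLoRA adapter, fold the low-rank deltas into the dense
# weights, and save the merged model.
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)
merged = model.merge_and_unload()
merged.save_pretrained("./MS3.2-24B-Magnum-Diamond")
```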
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/gum1h0x/24b-magnum-lora/runs/3zudxeg3?nw=nwuseradrianjuliusbeck)
Training showed a loss spike of unclear significance on one sample, which did not occur with the same dataset on Mistral Small 3.1 Instruct; the resulting model nonetheless appears to behave normally.
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
#base_model_ignore_patterns: "consolidated.safetensors"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: NewEden/magnum-v5-sft-prototype-ms3.2-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: NewEden/magnum-v5-sft-proto-mistral-v7-tekken-rev1-32k
    ds_type: parquet
    type:
shuffle_merged_datasets: true
dataset_prepared_path: ./magnum-24b-data
val_set_size: 0.0
output_dir: ./magnum-24b-lora-out
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 24b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name: 24b-magnum-lora-mistral-3.2
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed:
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
tracylu00200/uuu_fine_tune_gpt2
|
tracylu00200
| 2025-06-25T03:23:40Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:32:03Z |
---
license: apache-2.0
---
|
Stonersheart/uuu_fine_tune_gpt2
|
Stonersheart
| 2025-06-25T03:23:17Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:23:48Z |
---
license: apache-2.0
---
|
JS1016/uuu_fine_tune_gpt2
|
JS1016
| 2025-06-25T03:22:38Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:26:09Z |
---
license: apache-2.0
---
|
ianwangnas/uuu_fine_tune_gpt2
|
ianwangnas
| 2025-06-25T03:21:42Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:27:00Z |
---
license: apache-2.0
---
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1894
|
luckeciano
| 2025-06-25T03:20:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T23:31:44Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1894
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1894
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-HessianMaskSentence-1e-3_1894", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/780e3vej)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
cjisnc/task1
|
cjisnc
| 2025-06-25T03:18:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T03:18:21Z |
---
license: apache-2.0
---
|
Doctor-Shotgun/MS3.1-24B-Magnum-Diamond-LoRA
|
Doctor-Shotgun
| 2025-06-25T03:17:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:adapter:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"region:us"
] | null | 2025-06-01T07:37:24Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
tags:
- axolotl
- generated_from_trainer
---
# MS3.1-24B-Magnum-Diamond-LoRA
### **June 2025: An updated version is available [here](https://huggingface.co/Doctor-Shotgun/MS3.2-24B-Magnum-Diamond-LoRA)!**
Magnum "Diamond" in reference to the intense heat and pressure (generated through matrix multiplications) needed to turn the coal-esque material of dry, assistant-tuned models into creative writing gems!
This model is finetuned from a text-only conversion of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) as an rsLoRA adapter. It uses the same data mix as [Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha), but with pre-tokenization and modifications to the custom loss masking.
The goal was to re-create the model at a smaller, more consumer-friendly size.
This model should perform competently with or without prepending character names, and with or without prefill.
The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.
[Merged full model](https://huggingface.co/Doctor-Shotgun/MS3.1-24B-Magnum-Diamond)
## Intended uses and limitations
This model is intended for creative writing and roleplay purposes.
It may show biases similar to those observed in contemporary LLM-based roleplay, in addition to those exhibited by the Claude 3 series of models and the base model.
All outputs should be considered fiction, as this model is not intended to provide factual information or advice.
## Training procedure
[WandB](https://wandb.ai/doctorshotgun/24b-magnum-lora/runs/763psl82?nw=nwuserdoctorshotgun)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: ZeroAgency/Mistral-Small-3.1-24B-Instruct-2503-hf
#base_model_ignore_patterns: "consolidated.safetensors"
# optionally might have model_type or tokenizer_type
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: Doctor-Shotgun/magnum-v5-sft-prototype-ms3.1-lora
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: anthracite-core/magnum-v5-sft-proto-mistral-v7-tekken-rev1-32k
    ds_type: parquet
    type:
shuffle_merged_datasets: true
dataset_prepared_path: /home/ubuntu/docshotgun/data/magnum-24b-data
val_set_size: 0.0
output_dir: /home/ubuntu/docshotgun/data/24b-lora-out
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true
sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
peft_use_rslora: true
lora_modules_to_save:
- embed_tokens
- lm_head
wandb_project: 24b-magnum-lora
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: offload
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: paged_ademamix_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 2.0
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
JuzeZhang/language_of_motion
|
JuzeZhang
| 2025-06-25T03:17:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-17T21:06:56Z |
---
license: apache-2.0
---
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-map-11-5
|
veddhanth
| 2025-06-25T03:16:48Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-22T12:10:11Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a realistic portrait of sks face
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-map-11-5
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-map-11-5 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a realistic portrait of sks face` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-map-11-5/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal sketch (untested; assumes standard ๐ค Diffusers LoRA loading on the SDXL base pipeline):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base pipeline, then apply this repository's LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-map-11-5")

# Use the trigger phrase from the "Trigger words" section above.
image = pipe("a realistic portrait of sks face").images[0]
image.save("portrait.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
linfone2/uuu_fine_tune_taipower
|
linfone2
| 2025-06-25T03:13:28Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:43:19Z |
---
license: apache-2.0
---
|
vincrnt/uuu_fine_tune_taipower
|
vincrnt
| 2025-06-25T03:13:14Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:34:26Z |
---
license: apache-2.0
---
|