| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-28 00:41:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 523 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-28 00:41:47) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
| qgallouedec/trpo-BipedalWalkerHardcore-v3-4007792454 | qgallouedec | 2024-04-10T19:22:48Z | 4 | 0 | stable-baselines3 | ["stable-baselines3", "BipedalWalkerHardcore-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T12:51:38Z |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -99.80 +/- 14.91
name: mean_reward
verified: false
---
# **TRPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **TRPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 10000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
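For quick scripted checks outside the RL Zoo CLI, a minimal sketch along the following lines should work. It is not part of the original card: the checkpoint path assumes the `logs/` layout produced by `load_from_hub` above, and since training used `VecNormalize` (norm_obs: True), the `enjoy` command above, which restores the saved normalization statistics, remains the reference way to reproduce the reported score.
```python
# Hedged sketch: load the downloaded checkpoint with sb3-contrib's TRPO and roll
# out one episode. The zip path is an assumption based on the usual RL Zoo folder
# layout; adjust it to whatever load_from_hub created under logs/.
# pip install sb3-contrib gymnasium[box2d]
import gymnasium as gym
from sb3_contrib import TRPO

model = TRPO.load("logs/trpo/BipedalWalkerHardcore-v3_1/BipedalWalkerHardcore-v3.zip")
env = gym.make("BipedalWalkerHardcore-v3")

obs, _ = env.reset()
done, episode_return = False, 0.0
while not done:
    # deterministic=True mirrors how the zoo evaluates trained agents
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return:.1f}")

# Note: the policy was trained on VecNormalize-scaled observations, so raw-env
# rollouts like this one will usually score below the reported mean_reward.
```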
|
| qgallouedec/trpo-BipedalWalkerHardcore-v3-2419617561 | qgallouedec | 2024-04-10T19:22:05Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "BipedalWalkerHardcore-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T12:54:56Z |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -135.23 +/- 34.37
name: mean_reward
verified: false
---
# **TRPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **TRPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 10000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
| qgallouedec/trpo-BipedalWalkerHardcore-v3-218488576 | qgallouedec | 2024-04-10T19:21:45Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "BipedalWalkerHardcore-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T12:54:15Z |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -120.79 +/- 42.73
name: mean_reward
verified: false
---
# **TRPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **TRPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 10000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
| qgallouedec/trpo-HalfCheetah-v3-996877779 | qgallouedec | 2024-04-10T19:21:23Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "HalfCheetah-v3", "deep-reinforcement-learning", "reinforcement-learning", "HalfCheetah-v4", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T15:17:55Z |
---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- HalfCheetah-v4
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
metrics:
- type: mean_reward
value: 6347.49 +/- 124.64
name: mean_reward
verified: false
---
# **TRPO** Agent playing **HalfCheetah-v3**
This is a trained model of a **TRPO** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env HalfCheetah-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('target_kl', 0.04),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
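To score the agent from Python rather than via `rl_zoo3.enjoy`, a hedged sketch is shown below. It is not part of the original card: the paths follow the usual RL Zoo layout created by `load_from_hub` and are assumptions, and `HalfCheetah-v3` needs the legacy mujoco-py bindings (the tags also list `HalfCheetah-v4`, which runs on the maintained MuJoCo package).
```python
# Hedged sketch: evaluate the downloaded agent with the saved VecNormalize
# statistics applied, which is how the reported mean_reward is computed.
import gymnasium as gym
from sb3_contrib import TRPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

run_dir = "logs/trpo/HalfCheetah-v3_1"  # assumed layout; adjust to your logs/ contents
venv = DummyVecEnv([lambda: gym.make("HalfCheetah-v3")])
venv = VecNormalize.load(f"{run_dir}/HalfCheetah-v3/vecnormalize.pkl", venv)
venv.training = False      # freeze running statistics during evaluation
venv.norm_reward = False   # report raw environment rewards

model = TRPO.load(f"{run_dir}/HalfCheetah-v3.zip", env=venv)
mean_reward, std_reward = evaluate_policy(model, venv, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```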
|
| qgallouedec/trpo-HalfCheetah-v3-3908371383 | qgallouedec | 2024-04-10T19:20:43Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "HalfCheetah-v3", "deep-reinforcement-learning", "reinforcement-learning", "HalfCheetah-v4", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T17:09:49Z |
---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- HalfCheetah-v4
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
metrics:
- type: mean_reward
value: 4959.26 +/- 82.26
name: mean_reward
verified: false
---
# **TRPO** Agent playing **HalfCheetah-v3**
This is a trained model of a **TRPO** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env HalfCheetah-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('target_kl', 0.04),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
| qgallouedec/trpo-HalfCheetah-v3-3322000560 | qgallouedec | 2024-04-10T19:19:44Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "HalfCheetah-v3", "deep-reinforcement-learning", "reinforcement-learning", "HalfCheetah-v4", "model-index", "region:us"] | reinforcement-learning | 2023-02-28T17:10:55Z |
---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- HalfCheetah-v4
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
metrics:
- type: mean_reward
value: 4674.40 +/- 60.10
name: mean_reward
verified: false
---
# **TRPO** Agent playing **HalfCheetah-v3**
This is a trained model of a **TRPO** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env HalfCheetah-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env HalfCheetah-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('target_kl', 0.04),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
| aless2212/codegemma-7b-it-onnx-fp16 | aless2212 | 2024-04-10T19:18:54Z | 6 | 0 | transformers | ["transformers", "onnx", "safetensors", "gemma", "text-generation", "conversational", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-10T17:13:16Z |
---
library_name: transformers
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
- text: >
<start_of_turn>user
Write a Python function to calculate the nth fibonacci number.<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
license: gemma
license_link: https://ai.google.dev/gemma/terms
---
# CodeGemma
Model Page
: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://ai.google.dev/gemma/terms)
Authors
: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following and a 2 billion parameter pretrained variant for fast code completion.
| | [codegemma-2b](https://huggingface.co/google/codegemma-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [**codegemma-7b-it**](https://huggingface.co/google/codegemma-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion | ✅ | ✅ | |
| Generation from natural language | | ✅ | ✅ |
| Chat | | | ✅ |
| Instruction Following | | | ✅ |
### Sample Usage
This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need to use code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b), or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).
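Since this repository is an ONNX fp16 export, a hedged sketch of loading it through Optimum's ONNX Runtime integration is given below. It is not from the upstream card; whether `ORTModelForCausalLM` loads this particular export directly, and whether the chat template is carried over, are assumptions to verify (the original PyTorch checkpoint, google/codegemma-7b-it, follows the same flow with `AutoModelForCausalLM`).
```python
# pip install optimum[onnxruntime] transformers
# Hedged sketch: run the ONNX export with ONNX Runtime via Optimum. The repo's
# file layout being directly loadable by ORTModelForCausalLM is an assumption.
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "aless2212/codegemma-7b-it-onnx-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id)

# Mirror the widget prompt from the card's metadata.
messages = [{"role": "user", "content": "Write a Python function to calculate the nth fibonacci number."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```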
### Inputs and Outputs
Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction tuned model variant: natural language text or prompt
Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction tuned model variant: code and natural language
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code.
### Training Data Processing
The following data pre-processing techniques were applied:
* FIM: Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM and SPM modes. Our FIM settings are 80% FIM rate with 50-50 PSM/SPM. A brief prompt-format sketch follows this list.
* Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing
* We developed a novel technique for splitting the documents into prefix, middle, and suffix to make the suffix start in a more syntactically natural point rather than purely random distribution.
* Safety: Similarly to Gemma, we deployed rigorous safety filtering including filtering personal data, CSAM filtering and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
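As a brief illustration of the PSM layout referenced in the FIM bullet above, the sketch below shows how a fill-in-the-middle prompt is assembled. It is not part of the original card; the sentinel tokens are the ones published for the CodeGemma pretrained checkpoints and should be confirmed against the tokenizer's special tokens, and FIM targets the pretrained variants rather than this instruction-tuned export.
```python
# Hedged illustration of a PSM (prefix-suffix-middle) FIM prompt. The model is
# expected to generate the missing middle; SPM mode places the suffix segment
# ahead of the prefix. Sentinel tokens are assumptions to verify for this export.
prefix = "def fibonacci(n):\n    "
suffix = "\n    return result"
fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
print(fim_prompt)
```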
## Implementation Information
Information about the hardware and software used to train the models.
### Hardware
CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
## Evaluation Information
Model evaluation metrics and results.
### Evaluation Approach
We evaluate CodeGemma on a variety of academic benchmarks across several domains:
* Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling
* Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust)
* Q&A: BoolQ, PIQA, TriviaQA
* Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande
* Math Reasoning: GSM8K, MATH
### Evaluation Results
#### Coding Benchmarks
Benchmark | 2B | 7B | 7B-IT
----------------------|-------|-------|------
HumanEval | 31.1 | 44.5 | 56.1
MBPP | 43.6 | 56.2 | 54.2
HumanEval Single Line | 78.41 | 76.09 | 68.25
HumanEval Multi Line | 51.44 | 58.44 | 20.05
BC HE C++ | 24.2 | 32.9 | 42.2
BC HE C# | 10.6 | 22.4 | 26.7
BC HE Go | 20.5 | 21.7 | 28.6
BC HE Java | 29.2 | 41.0 | 48.4
BC HE JavaScript | 21.7 | 39.8 | 46.0
BC HE Kotlin | 28.0 | 39.8 | 51.6
BC HE Python | 21.7 | 42.2 | 48.4
BC HE Rust | 26.7 | 34.1 | 36.0
BC MBPP C++ | 47.1 | 53.8 | 56.7
BC MBPP C# | 28.7 | 32.5 | 41.2
BC MBPP Go | 45.6 | 43.3 | 46.2
BC MBPP Java | 41.8 | 50.3 | 57.3
BC MBPP JavaScript | 45.3 | 58.2 | 61.4
BC MBPP Kotlin | 46.8 | 54.7 | 59.9
BC MBPP Python | 38.6 | 59.1 | 62.0
BC MBPP Rust | 45.3 | 52.9 | 53.5
#### Natural Language Benchmarks
*(Figure: natural-language benchmark results chart; image not included in this dump.)*
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
* Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach.
* Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details.
## Model Usage & Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Code Gemma models have a wide range of applications, which vary between IT and PT models. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
Code Completion
: PT models can be used to complete code with an IDE extension
Code Generation
: IT model can be used to generate code with or without an IDE extension
Code Conversation
: IT model can power conversation interfaces which discuss code.
Code Education
: IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice.
### Known Limitations
Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs.
### Ethical Considerations & Risks
The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for model details.
### Benefits
At the time of release, this family of models provides high-performance open code-focused large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.
Using the coding benchmark evaluation metrics described in this document, these models have been shown to outperform other comparably sized open model alternatives.
By downloading this model, you agree to abide by Google's license and policies for Gemma.
|
| aless2212/codegemma-7b-it-openvino-int4-cpu | aless2212 | 2024-04-10T19:18:27Z | 4 | 0 | transformers | ["transformers", "openvino", "gemma", "text-generation", "conversational", "license:gemma", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-04-10T16:47:29Z |
---
license: gemma
---
By downloading this model, you agree to abide by Google's license and policies for Gemma.
|
| likhithasapu/human-ai-bert | likhithasapu | 2024-04-10T19:16:41Z | 106 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-04-10T11:32:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| MatthieuJ/ECE-TW3-JRGL-V5 | MatthieuJ | 2024-04-10T19:16:39Z | 9 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "davidkim205/Rhea-72b-v0.5", "abacusai/Smaug-72B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-10T18:58:42Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
---
# ECE-TW3-JRGL-V5
ECE-TW3-JRGL-V5 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration
|
| mistral-community/Mixtral-8x22B-v0.1-4bit | mistral-community | 2024-04-10T19:14:32Z | 308 | 54 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "moe", "fr", "it", "de", "es", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-04-10T19:13:33Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
---
# Model Card for Mixtral-8x22B
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
Model details:
- 🧠 ~176B params, ~44B active during inference
- 🪟 65K context window
- 🕵🏾‍♂️ 8 experts, 2 per token
- 🤓 32K vocab size
- ✂️ Similar tokenizer to the 7B models
Model quantized and added by [Prince Canuma](https://twitter.com/Prince_Canuma) using the full-precision model here: [v2ray/Mixtral-8x22B-v0.1](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1).
## Run the model in 4-bit precision
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The repository already stores bitsandbytes 4-bit weights, so no extra
# quantization config is needed here. Even at 4-bit the model is very large
# (~176B parameters), so in practice you will want device_map="auto" with
# accelerate installed and enough GPU memory to hold the shards.
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Who is Einstein?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Notice
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall.
|
| Ashwinatgsk/mistral_7b_guanaco | Ashwinatgsk | 2024-04-10T19:14:16Z | 0 | 1 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-04-03T13:09:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| sanduntg/Mistral-Chatbot | sanduntg | 2024-04-10T19:11:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-04-10T19:11:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| allknowingroger/Limmy-phi2-slerp | allknowingroger | 2024-04-10T19:10:05Z | 292 | 0 | transformers | ["transformers", "safetensors", "phi", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/Phigments12", "avinash31d/phi-2-slerp", "base_model:avinash31d/phi-2-slerp", "base_model:merge:avinash31d/phi-2-slerp", "base_model:liminerity/Phigments12", "base_model:merge:liminerity/Phigments12", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-24T13:54:56Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Phigments12
- avinash31d/phi-2-slerp
base_model:
- liminerity/Phigments12
- avinash31d/phi-2-slerp
license: apache-2.0
---
# Limmy-phi2-slerp
Limmy-phi2-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12)
* [avinash31d/phi-2-slerp](https://huggingface.co/avinash31d/phi-2-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Phigments12
layer_range: [0, 32]
- model: avinash31d/phi-2-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Phigments12
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Limmy-phi2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
| Trisert/mergekit-slerp-fxwtrsn | Trisert | 2024-04-10T19:07:40Z | 5 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:Equall/Saul-7B-Base", "base_model:merge:Equall/Saul-7B-Base", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:merge:HuggingFaceH4/zephyr-7b-beta", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-04-10T19:03:50Z |
---
base_model:
- HuggingFaceH4/zephyr-7b-beta
- Equall/Saul-Base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Equall/Saul-Base
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [0, 32]
merge_method: slerp
base_model: HuggingFaceH4/zephyr-7b-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
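The card does not include a usage example; a minimal sketch in the style of the other merge cards in this dump follows. It is not from the original card, and the assumption that the merged model inherits a working chat template from the zephyr base tokenizer should be verified.
```python
# Hedged usage sketch for the merged model; generation settings are illustrative.
from transformers import AutoTokenizer, pipeline
import torch

model_id = "Trisert/mergekit-slerp-fxwtrsn"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Summarize the key idea behind SLERP model merging."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline("text-generation", model=model_id, torch_dtype=torch.float16, device_map="auto")
outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```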
|
| allknowingroger/LeeMerge-7B-slerp | allknowingroger | 2024-04-10T19:07:35Z | 9 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Gille/StrangeMerges_32-7B-slerp", "chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "base_model:Gille/StrangeMerges_32-7B-slerp", "base_model:merge:Gille/StrangeMerges_32-7B-slerp", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "base_model:merge:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-25T06:44:24Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_32-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
base_model:
- Gille/StrangeMerges_32-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
license: apache-2.0
---
# LeeMerge-7B-slerp
LeeMerge-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_32-7B-slerp
layer_range: [0, 32]
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_32-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.5, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/LeeMerge-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
| hsmashiana/setfit-minilm-distilled | hsmashiana | 2024-04-10T19:07:01Z | 5 | 0 | setfit | ["setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "dataset:ag_news", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-MiniLM-L3-v2", "base_model:finetune:sentence-transformers/paraphrase-MiniLM-L3-v2", "region:us"] | text-classification | 2024-04-10T19:06:53Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- ag_news
metrics:
- accuracy
widget:
- text: Pakistani, US national arrested in New York bomb plot (AFP) AFP - A Pakistani
national and a US citizen were arrested over an alleged plot to blow up a subway
station in New York, city police commissioner Raymond Kelly said.
- text: 'Aon #39;comfortable #39; with past behaviour Aon, the world #39;s second
largest insurance broker, yesterday denied its brokers had ever steered business
to favoured insurance companies as a way of generating bigger commissions.'
- text: President Blasts Firing Notre Dame's outgoing president criticized the decision
to fire Tyrone Willingham after just three seasons, saying he was surprised the
coach was not given more time to try to succeed.
- text: 'Gold Fields investors snub bid Harmony #39;s bid to create the world #39;s
biggest gold miner suffered a blow yesterday when the first part of its offer
for South African rival Gold Fields received a lukewarm reception from shareholders.'
- text: Blood, knives, cage hint at atrocities (Chicago Tribune) Chicago Tribune -
Acting on information from a man who claimed to have escaped from militant Abu
Musab al-Zarqawi's network, the U.S. military over the weekend inspected a house
where intelligence officers believe hostages were detained, tortured and possibly
killed.
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-MiniLM-L3-v2
---
# SetFit with sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [ag_news](https://huggingface.co/datasets/ag_news) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 4 classes
- **Training Dataset:** [ag_news](https://huggingface.co/datasets/ag_news)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'Bangladesh paralysed by strikes Opposition activists have brought many towns and cities in Bangladesh to a halt, the day after 18 people died in explosions at a political rally.'</li><li>'Will Putin #39;s Power Play Make Russia Safer? Outwardly, Russia has not changed since the barrage of terrorist attacks that culminated in the school massacre in Beslan on Sept.'</li><li>'S African TV in beheading blunder Public broadcaster SABC apologises after news bulletin shows footage of American beheaded in Iraq.'</li></ul> |
| 1 | <ul><li>'Desiring Stability Redskins coach Joe Gibbs expects few major personnel changes in the offseason and wants to instill a culture of stability in Washington.'</li><li>'Mutombo says he #39;s being traded to Rockets; will back up, mentor <b>...</b> Dikembe Mutombo, 38, has agreed to a sign-and-trade deal that will send him from the Chicago Bulls to Houston in exchange for Eric Piatkowski, Adrian Griffin and Mike Wilks, the Houston Chronicle reports.'</li><li>'They #146;re in the wrong ATHENS -- Matt Emmons was focusing on staying calm. He should have been focusing on the right target.'</li></ul> |
| 3 | <ul><li>'U2 pitches for Apple New iTunes ads airing during baseball games Tuesday will feature the advertising-shy Irish rockers.'</li><li>'A Cosmic Storm: When Galaxy Clusters Collide Astronomers have found what they are calling the perfect cosmic storm, a galaxy cluster pile-up so powerful its energy output is second only to the Big Bang.'</li><li>'Computer Assoc. Cuts 800 Jobs Worldwide (AP) AP - Computer Associates International Inc. announced a restructuring plan Wednesday that would reduce its work force by 800 people worldwide, saving the business software maker #36;70 million annually once the plan is fully implemented.'</li></ul> |
| 2 | <ul><li>'Economy builds steam in KC Fed district The economy continued to strengthen in September and early October in the Great Plains and Rocky Mountain regions covered by the Tenth Federal Reserve District, the Federal Reserve Bank of Kansas City said Wednesday.'</li><li>'RBC Centura CEO steps down RALEIGH, NC - The head of RBC Centura Bank has stepped down, and his successor will run the bank out of Raleigh rather than Rocky Mount, where the bank is based.'</li><li>'Oracle acquisition of PeopleSoft leads flurry of deals NEW YORK (CBS.MW) -- US stocks closed higher Monday, with the Dow Jones Industrial Average ending at its best level in more than nine months amid better-than-expected economic data and merger-related optimism.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hsmashiana/setfit-minilm-distilled")
# Run inference
preds = model("President Blasts Firing Notre Dame's outgoing president criticized the decision to fire Tyrone Willingham after just three seasons, saying he was surprised the coach was not given more time to try to succeed.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 14 | 38.204 | 143 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 244 |
| 1 | 243 |
| 2 | 242 |
| 3 | 271 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
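The values above are serialized from SetFit's `TrainingArguments`. As a rough, hedged sketch (not the author's exact script, and the ~1,000-example ag_news subset is only approximated), they map onto the training API roughly as follows:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments
# Sketch only: approximate the ~1000-example ag_news subset described above
train_ds = load_dataset("ag_news", split="train").shuffle(seed=42).select(range(1000))
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
args = TrainingArguments(
    batch_size=16,
    num_epochs=1,
    num_iterations=20,
    sampling_strategy="oversampling",
    body_learning_rate=2e-5,
    head_learning_rate=2e-5,
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```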
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0008 | 1 | 0.9192 | - |
| 0.04 | 50 | 0.6426 | - |
| 0.08 | 100 | 0.0159 | - |
| 0.12 | 150 | 0.0024 | - |
| 0.16 | 200 | 0.0013 | - |
| 0.2 | 250 | 0.0011 | - |
| 0.24 | 300 | 0.0009 | - |
| 0.28 | 350 | 0.0006 | - |
| 0.32 | 400 | 0.0005 | - |
| 0.36 | 450 | 0.0005 | - |
| 0.4 | 500 | 0.0003 | - |
| 0.44 | 550 | 0.0003 | - |
| 0.48 | 600 | 0.0003 | - |
| 0.52 | 650 | 0.0004 | - |
| 0.56 | 700 | 0.0002 | - |
| 0.6 | 750 | 0.0002 | - |
| 0.64 | 800 | 0.0002 | - |
| 0.68 | 850 | 0.0002 | - |
| 0.72 | 900 | 0.0002 | - |
| 0.76 | 950 | 0.0002 | - |
| 0.8 | 1000 | 0.0002 | - |
| 0.84 | 1050 | 0.0002 | - |
| 0.88 | 1100 | 0.0001 | - |
| 0.92 | 1150 | 0.0002 | - |
| 0.96 | 1200 | 0.0002 | - |
| 1.0 | 1250 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.6.1
- Transformers: 4.38.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
whitemouse84/whisper-small-ru
|
whitemouse84
| 2024-04-10T19:05:29Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ru",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-10T07:30:02Z |
---
language:
- ru
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small Ru
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: ru
split: None
args: 'config: ru, split: test'
metrics:
- name: Wer
type: wer
value: 131.35769718547476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ru
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2080
- Wer: 131.3577
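A minimal inference sketch with the 🤗 `pipeline` API; the audio file path is a placeholder:
```python
from transformers import pipeline
# Load the fine-tuned checkpoint and transcribe a Russian audio file
asr = pipeline("automatic-speech-recognition", model="whitemouse84/whisper-small-ru")
print(asr("sample_ru.wav")["text"])
```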
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2013 | 0.61 | 1000 | 0.2301 | 130.4397 |
| 0.0753 | 1.21 | 2000 | 0.2159 | 131.7603 |
| 0.0902 | 1.82 | 3000 | 0.2046 | 129.7846 |
| 0.0394 | 2.43 | 4000 | 0.2080 | 131.3577 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.1
|
allknowingroger/FrankenLimmy-10B-passthrough
|
allknowingroger
| 2024-04-10T19:04:48Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"base_model:liminerity/M7-7b",
"base_model:finetune:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-25T10:12:11Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
base_model:
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
license: apache-2.0
---
# FrankenLimmy-10B-passthrough
FrankenLimmy-10B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0,9]
- sources:
- model: liminerity/M7-7b
layer_range: [5,14]
- sources:
- model: liminerity/M7-7b
layer_range: [10,19]
- sources:
- model: liminerity/M7-7b
layer_range: [15,24]
- sources:
- model: liminerity/M7-7b
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/FrankenLimmy-10B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mingyue0101/super-instruct
|
mingyue0101
| 2024-04-10T19:02:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] | null | 2024-04-10T18:59:57Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
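In the absence of author-provided instructions, a hedged loading sketch based only on the adapter metadata (a PEFT adapter for codellama/CodeLlama-7b-Instruct-hf) might look like the following; the prompt text is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model, then attach this adapter on top of it
base_id = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "mingyue0101/super-instruct")
# The expected prompt format is undocumented; a plain instruction is used here
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```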
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
sanduntg/MistralLite
|
sanduntg
| 2024-04-10T18:57:32Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-10T18:52:11Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: MistralLite
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MistralLite
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
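Usage is not documented; as a hedged sketch based on the metadata above, the adapter can be attached to the GPTQ base model. Loading the base requires a GPTQ backend such as auto-gptq/optimum, and the `[INST]` prompt format is an assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "sanduntg/MistralLite")
inputs = tokenizer("[INST] What can this model do? [/INST]", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```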
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
allknowingroger/M7-8B-passthrough
|
allknowingroger
| 2024-04-10T18:55:33Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"base_model:liminerity/M7-7b",
"base_model:finetune:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-28T08:19:33Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
base_model:
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
- liminerity/M7-7b
license: apache-2.0
---
# M7-8B-passthrough
M7-8B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0,9]
- sources:
- model: liminerity/M7-7b
layer_range: [5,14]
- sources:
- model: liminerity/M7-7b
layer_range: [10,19]
- sources:
- model: liminerity/M7-7b
layer_range: [15,24]
- sources:
- model: liminerity/M7-7b
layer_range: [20,32]
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/M7-8B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/QuantumBruins-7B-slerp
|
allknowingroger
| 2024-04-10T18:53:09Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rwitz2/go-bruins-v2.1.1",
"quantumaikr/quantum-dpo-v0.1",
"base_model:quantumaikr/quantum-dpo-v0.1",
"base_model:merge:quantumaikr/quantum-dpo-v0.1",
"base_model:rwitz2/go-bruins-v2.1.1",
"base_model:merge:rwitz2/go-bruins-v2.1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-28T19:21:04Z |
---
tags:
- merge
- mergekit
- lazymergekit
- rwitz2/go-bruins-v2.1.1
- quantumaikr/quantum-dpo-v0.1
base_model:
- rwitz2/go-bruins-v2.1.1
- quantumaikr/quantum-dpo-v0.1
license: apache-2.0
---
# QuantumBruins-7B-slerp
QuantumBruins-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rwitz2/go-bruins-v2.1.1](https://huggingface.co/rwitz2/go-bruins-v2.1.1)
* [quantumaikr/quantum-dpo-v0.1](https://huggingface.co/quantumaikr/quantum-dpo-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rwitz2/go-bruins-v2.1.1
layer_range: [0, 32]
- model: quantumaikr/quantum-dpo-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: quantumaikr/quantum-dpo-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/QuantumBruins-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/PercivalMelodias-7B-slerp
|
allknowingroger
| 2024-04-10T18:51:19Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"AurelPx/Percival_01-7b-slerp",
"AurelPx/Meliodas-7b-dare",
"base_model:AurelPx/Meliodas-7b-dare",
"base_model:merge:AurelPx/Meliodas-7b-dare",
"base_model:AurelPx/Percival_01-7b-slerp",
"base_model:merge:AurelPx/Percival_01-7b-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-23T08:36:44Z |
---
tags:
- merge
- mergekit
- lazymergekit
- AurelPx/Percival_01-7b-slerp
- AurelPx/Meliodas-7b-dare
base_model:
- AurelPx/Percival_01-7b-slerp
- AurelPx/Meliodas-7b-dare
license: apache-2.0
---
# PercivalMelodias-7B-slerp
PercivalMelodias-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [AurelPx/Meliodas-7b-dare](https://huggingface.co/AurelPx/Meliodas-7b-dare)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AurelPx/Percival_01-7b-slerp
layer_range: [0, 32]
- model: AurelPx/Meliodas-7b-dare
layer_range: [0, 32]
merge_method: slerp
base_model: AurelPx/Percival_01-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
blockblockblock/Mengzi3-13B-Base-bpw2.5
|
blockblockblock
| 2024-04-10T18:51:17Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-03-31T07:17:11Z |
---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
---
<div align="left">
<h1>
Mengzi3-13B-Base
</h1>
</div>
<p align="center">
<img src="https://raw.githubusercontent.com/Langboat/Mengzi3/main/assets/mengzi_logo.png" width="200"/>
</p>
<p align="center">
🤗 <a href="https://huggingface.co/Langboat">Hugging Face</a> | 🤖 <a href="https://modelscope.cn/organization/Langboat">ModelScope</a> | <a href="https://gitee.com/mindspore/mindformers/blob/r1.0/research/mengzi3/mengzi3.md"><img src="https://www.mindspore.cn/_static/logo-zh-light.99fc9222.svg" width="50" style="white-space: nowrap;display: inline-block;overflow: hidden;max-width: 100%;"/></a> | <a href="https://wisemodel.cn/organization/Langboat">Wisemodel</a> | 💬 <a href="https://github.com/Langboat/Mengzi3/blob/main/assets/wechat.png">WeChat</a> | <a href="https://www.langboat.com/document/mengzi/mengzi-gpt/call">API</a> | <a href="https://www.langboat.com/portal/mengzi-gpt"><img src="https://raw.githubusercontent.com/Langboat/Mengzi3/main/assets/mengzi_logo.png" width="16" style="white-space: nowrap;display: inline-block;overflow: hidden;max-width: 100%;"/> 孟子GPT</a>
</p>
# 模型介绍/Introduction
本次开源Mengzi3 13B系列模型,模型的地址如下:
This release open-sources the Mengzi3 13B series of models; the model links are as follows:
| | Mengzi3-13B-Base | Mengzi3-13B-Chat |
| :-: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------: |
| 13B | [🤗](https://huggingface.co/Langboat/Mengzi3-13B-Base) / [🤖](https://modelscope.cn/Langboat/Mengzi3-13B-Base) / [MindSpore](https://gitee.com/mindspore/mindformers/blob/r1.0/research/mengzi3/mengzi3.md) / [Wisemodel](https://wisemodel.cn/models/Langboat/Mengzi3-13B-Base) | 敬请期待 |
Mengzi3-13B模型基于Llama架构,语料精选自网页、百科、社交、媒体、新闻,以及高质量的开源数据集。通过在万亿tokens上进行多语言语料的继续训练,模型的中文能力突出并且兼顾多语言能力。
Mengzi3-13B is based on the Llama architecture, with a corpus drawn from web pages, encyclopedias, social media, news, and high-quality open-source datasets. Through continued training on trillions of tokens of multilingual data, the model achieves outstanding Chinese capability while retaining strong multilingual ability.
# 快速开始/Quickstart
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Langboat/Mengzi3-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Langboat/Mengzi3-13B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('指令:回答以下问题。输入:介绍一下孟子。输出:', return_tensors='pt')
if torch.cuda.is_available():
inputs = inputs.to('cuda')
pred = model.generate(**inputs, max_new_tokens=512, repetition_penalty=1.01, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```
详细的模型推理和微调代码见[Github](https://github.com/Langboat/Mengzi3)
Detailed model inference and fine-tuning code is available on [Github](https://github.com/Langboat)
# 性能评测/Evaluation
Mengzi3-13B-Base在各项基准测试中与同等参数量大语言模型相比,语言能力成绩领先,数学和编程能力位于前列。
Across various benchmark tests, Mengzi3-13B-Base leads large language models of comparable size in language proficiency and ranks near the front in math and programming proficiency.
| | MMLU | CMMLU | OCNLI | GSM8K | HumanEval |
| :------------------------: | :---------------------: | :---------------------: | :---------------------: | :---: | :-------: |
| Baichuan2-13B-Base | 0.530 | 0.489 | 0.433 | 0.528 | 0.171 |
| Qwen-14B | 0.589 | 0.539 | 0.550 | 0.613 | 0.323 |
| ChatGLM3-6B-base | 0.551 | 0.495 | 0.754 | 0.723 | - |
| InternLM2-20B | 0.610 | 0.538 | 0.650 | 0.761 | 0.488 |
| Skywork-13B-base | 0.557 | 0.524 | 0.426 | 0.558 | - |
| LingoWhale-8B | 0.541 | 0.495 | 0.352 | 0.550 | 0.329 |
| DeepSeek-7B | 0.436 | 0.424 | 0.356 | 0.174 | 0.262 |
| DeepSeek-MoE-16B-base | 0.423 | 0.388 | 0.342 | 0.188 | 0.268 |
| MindSource-7B | 0.498 | 0.425 | 0.528 | - | - |
| **Mengzi3-13B-Base** | **0.651 (+6.7%)** | **0.588 (+9.1%)** | **0.776 (+2.9%)** | 0.631 | 0.287 |
> 以上结果基于5-shot,MMLU/CMMLU/OCNLI结果来自[FlagEval](https://flageval.baai.ac.cn/)
>
> The above results are based on 5-shot,MMLU/CMMLU/OCNLI results from [FlagEval](https://flageval.baai.ac.cn/)
# 协议/License Agreement
Mengzi3-13B-Base依照Apache 2.0协议开源,对学术研究完全开放,同时支持免费商用。如需申请商业许可证,请[联系我们](https://www.langboat.com/form?p=3),其他商务合作请联系[bd@langboat.com](mailto:bd@langboat.com)。
Mengzi3-13B-Base is open-sourced under the Apache 2.0 license, fully open for academic research and free for commercial use. To apply for a commercial license, please [contact us](https://www.langboat.com/en/form?p=3); for other business cooperation, please contact [bd@langboat.com](mailto:bd@langboat.com).
|
automerger/MergerixStrangemerges_32-7B
|
automerger
| 2024-04-10T18:49:26Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"base_model:finetune:Gille/StrangeMerges_32-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-10T18:48:33Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- Gille/StrangeMerges_32-7B-slerp
---
# MergerixStrangemerges_32-7B
MergerixStrangemerges_32-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: MiniMoog/Mergerix-7b-v0.3
# No parameters necessary for base model
- model: Gille/StrangeMerges_32-7B-slerp
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: MiniMoog/Mergerix-7b-v0.3
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/MergerixStrangemerges_32-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
QuantFactory/YamshadowExperiment28-7B-GGUF
|
QuantFactory
| 2024-04-10T18:43:54Z | 20 | 0 | null |
[
"gguf",
"mistral",
"text-generation-inference",
"automerger",
"text-generation",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:quantized:automerger/YamshadowExperiment28-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-04-10T12:57:30Z |
---
license: apache-2.0
base_model: automerger/YamshadowExperiment28-7B
pipeline_tag: text-generation
tags:
- mistral
- text-generation-inference
- automerger
inference: false
---
# YamshadowExperiment28-7B-GGUF
- Quantized version of [YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
- Created using llama.cpp
## Available Quants
- IQ3_S
- Q2_K
- Q3_K_L
- Q3_K_M
- Q3_K_S
- Q4_0
- Q4_K_M
- Q4_K_S
- Q5_0
- Q5_K_M
- Q5_K_S
- Q6_K
- Q8_0
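These quants can be run with llama.cpp or its Python bindings. A minimal sketch with `llama-cpp-python` follows; the exact `.gguf` filename is an assumption and should be adjusted to the file actually present in the repo:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Download one quant (hypothetical filename shown) and run a short completion
path = hf_hub_download(
    "QuantFactory/YamshadowExperiment28-7B-GGUF",
    "YamshadowExperiment28-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```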
|
allknowingroger/LadybirdGonzo-7B-slerp
|
allknowingroger
| 2024-04-10T18:43:38Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Badgids/Gonzo-Chat-7B",
"bobofrut/ladybird-base-7B-v8",
"base_model:Badgids/Gonzo-Chat-7B",
"base_model:merge:Badgids/Gonzo-Chat-7B",
"base_model:bobofrut/ladybird-base-7B-v8",
"base_model:merge:bobofrut/ladybird-base-7B-v8",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-31T07:19:52Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Badgids/Gonzo-Chat-7B
- bobofrut/ladybird-base-7B-v8
base_model:
- Badgids/Gonzo-Chat-7B
- bobofrut/ladybird-base-7B-v8
license: apache-2.0
---
# LadybirdGonzo-7B-slerp
LadybirdGonzo-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Badgids/Gonzo-Chat-7B](https://huggingface.co/Badgids/Gonzo-Chat-7B)
* [bobofrut/ladybird-base-7B-v8](https://huggingface.co/bobofrut/ladybird-base-7B-v8)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Badgids/Gonzo-Chat-7B
layer_range: [0, 32]
- model: bobofrut/ladybird-base-7B-v8
layer_range: [0, 32]
merge_method: slerp
base_model: Badgids/Gonzo-Chat-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/LadybirdGonzo-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/LadybirdPercival-7B-slerp
|
allknowingroger
| 2024-04-10T18:40:52Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/LadybirdGonzo-7B-slerp",
"Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B",
"base_model:Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B",
"base_model:merge:Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B",
"base_model:allknowingroger/LadybirdGonzo-7B-slerp",
"base_model:merge:allknowingroger/LadybirdGonzo-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-31T16:30:57Z |
---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/LadybirdGonzo-7B-slerp
- Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
base_model:
- allknowingroger/LadybirdGonzo-7B-slerp
- Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
license: apache-2.0
---
# LadybirdPercival-7B-slerp
LadybirdPercival-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/LadybirdGonzo-7B-slerp](https://huggingface.co/allknowingroger/LadybirdGonzo-7B-slerp)
* [Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B](https://huggingface.co/Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/LadybirdGonzo-7B-slerp
layer_range: [0, 32]
- model: Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Ksgk-fy/M7Percival_010.14-0.33-0.6-0.72-0.02-0.65-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/LadybirdPercival-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
yimiwang/flan-t5-large-peft-combinedhub
|
yimiwang
| 2024-04-10T18:39:23Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"license:apache-2.0",
"region:us"
] | null | 2024-04-08T04:17:31Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: google/flan-t5-large
metrics:
- rouge
model-index:
- name: flan-t5-large-peft-combinedhub
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-peft-combinedhub
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7588
- Rouge1: 43.5001
- Rouge2: 17.8611
- Rougel: 31.5148
- Rougelsum: 40.2359
- Gen Len: 98.7718
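For inference, the LoRA adapter can be attached to the flan-t5-large base. This is a hedged sketch; the `summarize:` prefix is an assumption, since the training prompt format is not documented:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "yimiwang/flan-t5-large-peft-combinedhub")
text = "summarize: " + "Your article text goes here."
input_ids = tokenizer(text, return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(input_ids=input_ids, max_new_tokens=128)[0], skip_special_tokens=True))
```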
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.9097 | 1.0 | 2812 | 1.7698 | 43.3373 | 17.6766 | 31.4769 | 40.0561 | 97.4492 |
| 1.8901 | 2.0 | 5624 | 1.7603 | 43.4367 | 17.8114 | 31.5721 | 40.1962 | 100.2230 |
| 1.8854 | 3.0 | 8436 | 1.7588 | 43.5001 | 17.8611 | 31.5148 | 40.2359 | 98.7718 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
qgallouedec/REINFORCE-CartPole-v1
|
qgallouedec
| 2024-04-10T18:38:23Z | 0 | 0 | null |
[
"reinforcement-learning",
"CartPole-v1",
"region:us"
] |
reinforcement-learning
| 2024-04-10T18:38:20Z |
---
tags:
- reinforcement-learning
- CartPole-v1
---
|
allknowingroger/JupiterMerge-7B-slerp
|
allknowingroger
| 2024-04-10T18:35:33Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Jupiter-k-7B-slerp",
"Gille/StrangeMerges_21-7B-slerp",
"conversational",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:merge:Gille/StrangeMerges_21-7B-slerp",
"base_model:Kukedlc/Jupiter-k-7B-slerp",
"base_model:merge:Kukedlc/Jupiter-k-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-01T16:24:57Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Jupiter-k-7B-slerp
- Gille/StrangeMerges_21-7B-slerp
base_model:
- Kukedlc/Jupiter-k-7B-slerp
- Gille/StrangeMerges_21-7B-slerp
license: apache-2.0
---
# JupiterMerge-7B-slerp
JupiterMerge-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Jupiter-k-7B-slerp
layer_range: [0, 32]
- model: Gille/StrangeMerges_21-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/Jupiter-k-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/JupiterMerge-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
BuroIdentidadDigital/pasaporte_Mex_v1
|
BuroIdentidadDigital
| 2024-04-10T18:34:55Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-10T18:20:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
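No usage code is provided. As a hedged sketch for a vision-encoder-decoder checkpoint (assuming the repository also ships image-processor and tokenizer files), inference might look like this; the image path is a placeholder:
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer
repo = "BuroIdentidadDigital/pasaporte_Mex_v1"
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
# Encode a document image and decode the generated text
image = Image.open("passport_sample.jpg").convert("RGB")
pixel_values = image_processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=128)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```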
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
allknowingroger/TripleMerge-7B-Ties
|
allknowingroger
| 2024-04-10T18:32:36Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/MultiverseEx26-7B-slerp",
"allknowingroger/limyClown-7B-slerp",
"allknowingroger/LeeMerge-7B-slerp",
"base_model:allknowingroger/LeeMerge-7B-slerp",
"base_model:merge:allknowingroger/LeeMerge-7B-slerp",
"base_model:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:merge:allknowingroger/MultiverseEx26-7B-slerp",
"base_model:allknowingroger/limyClown-7B-slerp",
"base_model:merge:allknowingroger/limyClown-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-02T08:03:18Z |
---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/LeeMerge-7B-slerp
base_model:
- allknowingroger/MultiverseEx26-7B-slerp
- allknowingroger/limyClown-7B-slerp
- allknowingroger/LeeMerge-7B-slerp
license: apache-2.0
---
# TripleMerge-7B-Ties
TripleMerge-7B-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/MultiverseEx26-7B-slerp](https://huggingface.co/allknowingroger/MultiverseEx26-7B-slerp)
* [allknowingroger/limyClown-7B-slerp](https://huggingface.co/allknowingroger/limyClown-7B-slerp)
* [allknowingroger/LeeMerge-7B-slerp](https://huggingface.co/allknowingroger/LeeMerge-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: allknowingroger/MultiverseEx26-7B-slerp
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: allknowingroger/limyClown-7B-slerp
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: allknowingroger/LeeMerge-7B-slerp
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: allknowingroger/limyClown-7B-slerp
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/TripleMerge-7B-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/TripleMerge2-7B-Ties
|
allknowingroger
| 2024-04-10T18:31:10Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/LimyQstar-7B-slerp",
"allknowingroger/JaskierMistral-7B-slerp",
"allknowingroger/LimmyAutomerge-7B-slerp",
"base_model:allknowingroger/JaskierMistral-7B-slerp",
"base_model:merge:allknowingroger/JaskierMistral-7B-slerp",
"base_model:allknowingroger/LimmyAutomerge-7B-slerp",
"base_model:merge:allknowingroger/LimmyAutomerge-7B-slerp",
"base_model:allknowingroger/LimyQstar-7B-slerp",
"base_model:merge:allknowingroger/LimyQstar-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-02T12:00:20Z |
---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/LimyQstar-7B-slerp
- allknowingroger/JaskierMistral-7B-slerp
- allknowingroger/LimmyAutomerge-7B-slerp
base_model:
- allknowingroger/LimyQstar-7B-slerp
- allknowingroger/JaskierMistral-7B-slerp
- allknowingroger/LimmyAutomerge-7B-slerp
license: apache-2.0
---
# TripleMerge2-7B-Ties
TripleMerge2-7B-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/LimyQstar-7B-slerp](https://huggingface.co/allknowingroger/LimyQstar-7B-slerp)
* [allknowingroger/JaskierMistral-7B-slerp](https://huggingface.co/allknowingroger/JaskierMistral-7B-slerp)
* [allknowingroger/LimmyAutomerge-7B-slerp](https://huggingface.co/allknowingroger/LimmyAutomerge-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: allknowingroger/LimyQstar-7B-slerp
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: allknowingroger/JaskierMistral-7B-slerp
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: allknowingroger/LimmyAutomerge-7B-slerp
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: allknowingroger/LimyQstar-7B-slerp
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/TripleMerge2-7B-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Saqlaintaswar1/safespace
|
Saqlaintaswar1
| 2024-04-10T18:30:16Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"music",
"depth-estimation",
"aa",
"dataset:m-a-p/COIG-CQIA",
"license:apache-2.0",
"region:us"
] |
depth-estimation
| 2024-04-10T18:28:00Z |
---
license: apache-2.0
datasets:
- m-a-p/COIG-CQIA
language:
- aa
metrics:
- accuracy
- bertscore
library_name: adapter-transformers
pipeline_tag: depth-estimation
tags:
- music
---
|
fmenegui/cinc5
|
fmenegui
| 2024-04-10T18:29:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-04-10T18:14:48Z |
#4 /home/fdias/repositorios/media/experiments/train_submissao/logs/Normal_224_finetuneFromCincAndNat/(sub10fold)2024-04-09_22-06-32
|
adithyac2207/ppo-LunarLander-v2
|
adithyac2207
| 2024-04-10T18:29:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-10T18:27:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.41 +/- 13.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from the Hub; "ppo-LunarLander-v2.zip" follows the usual
# naming convention and may need adjusting to the actual file in the repo
checkpoint = load_from_hub("adithyac2207/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
blockblockblock/Mengzi3-13B-Base-bpw2.25
|
blockblockblock
| 2024-04-10T18:28:50Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-04-10T18:24:44Z |
---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
---
<div align="left">
<h1>
Mengzi3-13B-Base
</h1>
</div>
<p align="center">
<img src="https://raw.githubusercontent.com/Langboat/Mengzi3/main/assets/mengzi_logo.png" width="200"/>
</p>
<p align="center">
🤗 <a href="https://huggingface.co/Langboat">Hugging Face</a> | 🤖 <a href="https://modelscope.cn/organization/Langboat">ModelScope</a> | <a href="https://gitee.com/mindspore/mindformers/blob/r1.0/research/mengzi3/mengzi3.md"><img src="https://www.mindspore.cn/_static/logo-zh-light.99fc9222.svg" width="50" style="white-space: nowrap;display: inline-block;overflow: hidden;max-width: 100%;"/></a> | <a href="https://wisemodel.cn/organization/Langboat">Wisemodel</a> | 💬 <a href="https://github.com/Langboat/Mengzi3/blob/main/assets/wechat.png">WeChat</a> | <a href="https://www.langboat.com/document/mengzi/mengzi-gpt/call">API</a> | <a href="https://www.langboat.com/portal/mengzi-gpt"><img src="https://raw.githubusercontent.com/Langboat/Mengzi3/main/assets/mengzi_logo.png" width="16" style="white-space: nowrap;display: inline-block;overflow: hidden;max-width: 100%;"/> 孟子GPT</a>
</p>
# 模型介绍/Introduction
本次开源Mengzi3 13B系列模型,模型的地址如下:
This release open-sources the Mengzi3 13B series of models; the model links are as follows:
| | Mengzi3-13B-Base | Mengzi3-13B-Chat |
| :-: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------: |
| 13B | [🤗](https://huggingface.co/Langboat/Mengzi3-13B-Base) / [🤖](https://modelscope.cn/Langboat/Mengzi3-13B-Base) / [MindSpore](https://gitee.com/mindspore/mindformers/blob/r1.0/research/mengzi3/mengzi3.md) / [Wisemodel](https://wisemodel.cn/models/Langboat/Mengzi3-13B-Base) | 敬请期待 |
Mengzi3-13B模型基于Llama架构,语料精选自网页、百科、社交、媒体、新闻,以及高质量的开源数据集。通过在万亿tokens上进行多语言语料的继续训练,模型的中文能力突出并且兼顾多语言能力。
Mengzi3-13B is based on the Llama architecture, with a corpus drawn from web pages, encyclopedias, social media, news, and high-quality open-source datasets. Through continued training on trillions of tokens of multilingual data, the model achieves outstanding Chinese capability while retaining strong multilingual ability.
# 快速开始/Quickstart
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Langboat/Mengzi3-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Langboat/Mengzi3-13B-Base", device_map="auto", trust_remote_code=True)
inputs = tokenizer('指令:回答以下问题。输入:介绍一下孟子。输出:', return_tensors='pt')
if torch.cuda.is_available():
inputs = inputs.to('cuda')
pred = model.generate(**inputs, max_new_tokens=512, repetition_penalty=1.01, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```
详细的模型推理和微调代码见[Github](https://github.com/Langboat/Mengzi3)
Detailed model inference and fine-tuning code is available on [Github](https://github.com/Langboat)
# 性能评测/Evaluation
Mengzi3-13B-Base在各项基准测试中与同等参数量大语言模型相比,语言能力成绩领先,数学和编程能力位于前列。
Across various benchmark tests, Mengzi3-13B-Base leads large language models of comparable size in language proficiency and ranks near the front in math and programming proficiency.
| | MMLU | CMMLU | OCNLI | GSM8K | HumanEval |
| :------------------------: | :---------------------: | :---------------------: | :---------------------: | :---: | :-------: |
| Baichuan2-13B-Base | 0.530 | 0.489 | 0.433 | 0.528 | 0.171 |
| Qwen-14B | 0.589 | 0.539 | 0.550 | 0.613 | 0.323 |
| ChatGLM3-6B-base | 0.551 | 0.495 | 0.754 | 0.723 | - |
| InternLM2-20B | 0.610 | 0.538 | 0.650 | 0.761 | 0.488 |
| Skywork-13B-base | 0.557 | 0.524 | 0.426 | 0.558 | - |
| LingoWhale-8B | 0.541 | 0.495 | 0.352 | 0.550 | 0.329 |
| DeepSeek-7B | 0.436 | 0.424 | 0.356 | 0.174 | 0.262 |
| DeepSeek-MoE-16B-base | 0.423 | 0.388 | 0.342 | 0.188 | 0.268 |
| MindSource-7B | 0.498 | 0.425 | 0.528 | - | - |
| **Mengzi3-13B-Base** | **0.651 (+6.7%)** | **0.588 (+9.1%)** | **0.776 (+2.9%)** | 0.631 | 0.287 |
> 以上结果基于5-shot,MMLU/CMMLU/OCNLI结果来自[FlagEval](https://flageval.baai.ac.cn/)
>
> The above results are based on 5-shot,MMLU/CMMLU/OCNLI results from [FlagEval](https://flageval.baai.ac.cn/)
# 协议/License Agreement
Mengzi3-13B-Base依照Apache 2.0协议开源,对学术研究完全开放,同时支持免费商用。如需申请商业许可证,请[联系我们](https://www.langboat.com/form?p=3),其他商务合作请联系[bd@langboat.com](mailto:bd@langboat.com)。
Mengzi3-13B-Base is open-sourced under the Apache 2.0 license, fully open for academic research, and free for commercial use. To apply for a commercial license, please [contact us](https://www.langboat.com/en/form?p=3); for other business cooperation, please contact [bd@langboat.com](mailto:bd@langboat.com).
|
LA1512/fine-tuned-distilbart-xsum-12-3-news-summary
|
LA1512
| 2024-04-10T18:26:56Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"en",
"base_model:sshleifer/distilbart-xsum-12-3",
"base_model:finetune:sshleifer/distilbart-xsum-12-3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-10T18:08:56Z |
---
license: apache-2.0
base_model: sshleifer/distilbart-xsum-12-3
tags:
- generated_from_trainer
metrics:
- rouge
- bertscore
model-index:
- name: results
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on the News-summary-Kaggle dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1426
- Rouge1: 51.2701
- Rouge2: 28.3575
- Rougel: 37.9263
- Rougelsum: 45.8934
- Gen Len: 75.777
## Model description
This model uses the pre-trained model sshleifer/distilbart-xsum-12-3, fine-tuned on the News-summary-Kaggle dataset. Our aim is to build a model that can summarize news efficiently.
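A minimal usage sketch (the checkpoint name is taken from this repository; the example article text and generation settings are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="LA1512/fine-tuned-distilbart-xsum-12-3-news-summary",
)

# Replace this placeholder with the news article you want to summarize.
article = "Replace this string with the news article you want to summarize."
summary = summarizer(article, max_length=128, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```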
## Intended uses & limitations
More information needed
## Training and evaluation data
News-summary. Link: https://www.kaggle.com/datasets/sunnysai12345/news-summary
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- label_smoothing_factor: 0.1
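A sketch of how these settings map onto the 🤗 Trainer API (an illustrative reconstruction, not the exact training script; the output path and the surrounding trainer wiring are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./results",            # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    label_smoothing_factor=0.1,
)
```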
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.4812 | 1.0 | 425 | 3.3209 | 47.7226 | 26.3282 | 35.5063 | 42.5426 | 66.523 |
| 3.2269 | 2.0 | 850 | 3.1838 | 50.4271 | 27.7047 | 37.2638 | 45.1897 | 77.115 |
| 2.9504 | 3.0 | 1275 | 3.1401 | 50.6362 | 28.2773 | 37.6 | 45.4901 | 74.992 |
| 2.8014 | 4.0 | 1700 | 3.1346 | 51.2942 | 28.4684 | 38.0877 | 46.0386 | 74.299 |
| 2.71 | 5.0 | 2125 | 3.1426 | 51.2701 | 28.3575 | 37.9263 | 45.8934 | 75.777 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
allknowingroger/NeuralDolphin-7B-slerp
|
allknowingroger
| 2024-04-10T18:24:31Z | 93 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fterry/FofoNet-DolphinChat-slerp",
"vgorce/MarcoroNeuralChat-7B-slerp",
"base_model:fterry/FofoNet-DolphinChat-slerp",
"base_model:merge:fterry/FofoNet-DolphinChat-slerp",
"base_model:vgorce/MarcoroNeuralChat-7B-slerp",
"base_model:merge:vgorce/MarcoroNeuralChat-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-05T09:04:37Z |
---
tags:
- merge
- mergekit
- lazymergekit
- fterry/FofoNet-DolphinChat-slerp
- vgorce/MarcoroNeuralChat-7B-slerp
base_model:
- fterry/FofoNet-DolphinChat-slerp
- vgorce/MarcoroNeuralChat-7B-slerp
license: apache-2.0
---
# NeuralDolphin-7B-slerp
NeuralDolphin-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fterry/FofoNet-DolphinChat-slerp](https://huggingface.co/fterry/FofoNet-DolphinChat-slerp)
* [vgorce/MarcoroNeuralChat-7B-slerp](https://huggingface.co/vgorce/MarcoroNeuralChat-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fterry/FofoNet-DolphinChat-slerp
layer_range: [0, 32]
- model: vgorce/MarcoroNeuralChat-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: vgorce/MarcoroNeuralChat-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ymoslem/whisper-base-ga2en-v2.1
|
ymoslem
| 2024-04-10T18:23:07Z | 79 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-04-09T18:54:02Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: Whisper Base GA-EN Speech Translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base GA-EN Speech Translation
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
The best model (this version) is at checkpoint 1800, epoch 1.94, and it achieves the following results on the evaluation set:
- Loss: 1.6780
- Bleu: 22.52
- Chrf: 39.24
- Wer: 76.7222
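A minimal usage sketch (the checkpoint name is taken from this repository; the audio file name is illustrative and assumed to contain Irish speech):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; it was trained to emit English text for Irish speech.
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-base-ga2en-v2.1",
)

result = translator("irish_speech_sample.wav")  # assumed local audio file
print(result["text"])
```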
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:-----:|:-----:|:---------------:|:--------:|
| 2.4277 | 0.11 | 100 | 4.21 | 19.8 | 2.2931 | 144.9797 |
| 2.1912 | 0.22 | 200 | 6.79 | 22.82 | 2.0213 | 152.5439 |
| 1.9199 | 0.32 | 300 | 6.62 | 24.85 | 1.9041 | 180.0090 |
| 1.7525 | 0.43 | 400 | 13.14 | 30.42 | 1.8026 | 98.1090 |
| 1.6623 | 0.54 | 500 | 17.73 | 34.37 | 1.7467 | 90.5448 |
| 1.4937 | 0.65 | 600 | 16.85 | 33.97 | 1.7301 | 92.3458 |
| 1.3587 | 0.76 | 700 | 14.77 | 33.3 | 1.6499 | 101.7109 |
| 1.274 | 0.86 | 800 | 18.28 | 35.46 | 1.6641 | 89.1941 |
| 1.1514 | 0.97 | 900 | 21.17 | 37.05 | 1.6172 | 80.1441 |
| 0.6932 | 1.08 | 1000 | 16.81 | 35.35 | 1.6421 | 99.0095 |
| 0.8294        | 1.19  | 1100 | 18.49 | 36.78 | 1.6699          | 90.9500  |
| 0.7662        | 1.29  | 1200 | 17.15 | 34.75 | 1.7147          | 95.8577  |
| 0.7704        | 1.4   | 1300 | 15.65 | 35.08 | 1.6752          | 104.2774 |
| 0.7333        | 1.51  | 1400 | 19.17 | 36.87 | 1.6812          | 89.2841  |
| 0.6879        | 1.62  | 1500 | 19.09 | 37.98 | 1.6719          | 84.6015  |
| 0.6297        | 1.73  | 1600 | 19.43 | 37.28 | 1.6847          | 89.5092  |
| 0.5843        | 1.83  | 1700 | 17.74 | 38.08 | 1.6659          | 98.1990  |
| 0.5342        | 1.94  | 1800 | 22.52 | 39.24 | 1.6780          | 76.7222  |
| 0.2743        | 2.05  | 1900 | 22.48 | 39.05 | 1.7151          | 78.8834  |
| 0.2932        | 2.16  | 2000 | 17.65 | 38.01 | 1.7044          | 99.2796  |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
el-filatova/Practica1
|
el-filatova
| 2024-04-10T18:20:32Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-03-03T12:56:31Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
allknowingroger/CalmExperiment-7B-slerp
|
allknowingroger
| 2024-04-10T18:20:06Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment26-7B",
"MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:merge:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:yam-peleg/Experiment26-7B",
"base_model:merge:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-06T15:28:10Z |
---
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment26-7B
- MaziyarPanahi/Calme-7B-Instruct-v0.9
base_model:
- yam-peleg/Experiment26-7B
- MaziyarPanahi/Calme-7B-Instruct-v0.9
license: apache-2.0
---
# CalmExperiment-7B-slerp
CalmExperiment-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/CalmExperiment-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/Mistraldouble-7B-task
|
allknowingroger
| 2024-04-10T18:18:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2",
"cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"conversational",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2",
"base_model:merge:MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:merge:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-07T09:52:12Z |
---
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
base_model:
- MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
license: apache-2.0
---
# Mistraldouble-7B-task
Mistraldouble-7B-task is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
## 🧩 Configuration
```yaml
models:
- model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
parameters:
weight: 0.35
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
weight: 0.65
base_model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
merge_method: task_arithmetic
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Mistraldouble-7B-task"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
allknowingroger/AutoLimmy-7B-slerp
|
allknowingroger
| 2024-04-10T18:16:42Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger/YamshadowExperiment28-7B",
"liminerity/M7-7b",
"base_model:automerger/YamshadowExperiment28-7B",
"base_model:merge:automerger/YamshadowExperiment28-7B",
"base_model:liminerity/M7-7b",
"base_model:merge:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-08T08:04:29Z |
---
tags:
- merge
- mergekit
- lazymergekit
- automerger/YamshadowExperiment28-7B
- liminerity/M7-7b
base_model:
- automerger/YamshadowExperiment28-7B
- liminerity/M7-7b
license: apache-2.0
---
# AutoLimmy-7B-slerp
AutoLimmy-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
- model: liminerity/M7-7b
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/YamshadowExperiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/AutoLimmy-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
thorirhrafn/sft_gpt7b_domar_pretuned
|
thorirhrafn
| 2024-04-10T18:09:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:AI-Sweden-Models/gpt-sw3-6.7b",
"base_model:adapter:AI-Sweden-Models/gpt-sw3-6.7b",
"license:other",
"region:us"
] | null | 2024-04-09T14:47:08Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: AI-Sweden-Models/gpt-sw3-6.7b
model-index:
- name: sft_gpt7b_domar_pretuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_gpt7b_domar_pretuned
This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-6.7b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5598
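A minimal loading sketch (assumes this repository contains a PEFT adapter for AI-Sweden-Models/gpt-sw3-6.7b, as the metadata indicates; the prompt, dtype, and device placement are illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "AI-Sweden-Models/gpt-sw3-6.7b"
adapter_id = "thorirhrafn/sft_gpt7b_domar_pretuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Example prompt", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```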
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6679 | 0.02 | 500 | 1.6397 |
| 1.7258 | 0.04 | 1000 | 1.6302 |
| 1.6602 | 0.06 | 1500 | 1.6228 |
| 1.6731 | 0.08 | 2000 | 1.6173 |
| 1.7399 | 0.09 | 2500 | 1.6155 |
| 1.8291 | 0.11 | 3000 | 1.6106 |
| 1.7113 | 0.13 | 3500 | 1.6078 |
| 1.6768 | 0.15 | 4000 | 1.6037 |
| 1.7504 | 0.17 | 4500 | 1.6028 |
| 1.598 | 0.19 | 5000 | 1.6003 |
| 1.5689 | 0.21 | 5500 | 1.5974 |
| 1.6727 | 0.23 | 6000 | 1.5961 |
| 1.5689 | 0.25 | 6500 | 1.5952 |
| 1.6331 | 0.26 | 7000 | 1.5931 |
| 1.6459 | 0.28 | 7500 | 1.5921 |
| 1.6334 | 0.3 | 8000 | 1.5918 |
| 1.6803 | 0.32 | 8500 | 1.5894 |
| 1.6182 | 0.34 | 9000 | 1.5879 |
| 1.693 | 0.36 | 9500 | 1.5866 |
| 1.6276 | 0.38 | 10000 | 1.5857 |
| 1.612 | 0.4 | 10500 | 1.5859 |
| 1.6412 | 0.42 | 11000 | 1.5843 |
| 1.6827 | 0.43 | 11500 | 1.5824 |
| 1.584 | 0.45 | 12000 | 1.5826 |
| 1.591 | 0.47 | 12500 | 1.5817 |
| 1.6641 | 0.49 | 13000 | 1.5805 |
| 1.6555 | 0.51 | 13500 | 1.5799 |
| 1.689 | 0.53 | 14000 | 1.5798 |
| 1.6216 | 0.55 | 14500 | 1.5785 |
| 1.6271 | 0.57 | 15000 | 1.5780 |
| 1.6898 | 0.59 | 15500 | 1.5771 |
| 1.6752 | 0.6 | 16000 | 1.5762 |
| 1.5884 | 0.62 | 16500 | 1.5761 |
| 1.6094 | 0.64 | 17000 | 1.5755 |
| 1.5202 | 0.66 | 17500 | 1.5749 |
| 1.6506 | 0.68 | 18000 | 1.5744 |
| 1.6805 | 0.7 | 18500 | 1.5736 |
| 1.6421 | 0.72 | 19000 | 1.5732 |
| 1.652 | 0.74 | 19500 | 1.5731 |
| 1.5729 | 0.76 | 20000 | 1.5722 |
| 1.6231 | 0.77 | 20500 | 1.5715 |
| 1.6527 | 0.79 | 21000 | 1.5710 |
| 1.656 | 0.81 | 21500 | 1.5705 |
| 1.5076 | 0.83 | 22000 | 1.5708 |
| 1.6925 | 0.85 | 22500 | 1.5700 |
| 1.6761 | 0.87 | 23000 | 1.5701 |
| 1.6376 | 0.89 | 23500 | 1.5697 |
| 1.696 | 0.91 | 24000 | 1.5686 |
| 1.6921 | 0.93 | 24500 | 1.5688 |
| 1.6896 | 0.94 | 25000 | 1.5681 |
| 1.7896 | 0.96 | 25500 | 1.5678 |
| 1.6342 | 0.98 | 26000 | 1.5679 |
| 1.6001 | 1.0 | 26500 | 1.5679 |
| 1.7183 | 1.02 | 27000 | 1.5678 |
| 1.5685 | 1.04 | 27500 | 1.5675 |
| 1.5349 | 1.06 | 28000 | 1.5672 |
| 1.6439 | 1.08 | 28500 | 1.5677 |
| 1.6201 | 1.1 | 29000 | 1.5670 |
| 1.6209 | 1.11 | 29500 | 1.5664 |
| 1.5495 | 1.13 | 30000 | 1.5665 |
| 1.5573 | 1.15 | 30500 | 1.5661 |
| 1.6094 | 1.17 | 31000 | 1.5660 |
| 1.625 | 1.19 | 31500 | 1.5662 |
| 1.5404 | 1.21 | 32000 | 1.5656 |
| 1.547 | 1.23 | 32500 | 1.5655 |
| 1.5997 | 1.25 | 33000 | 1.5648 |
| 1.6287 | 1.27 | 33500 | 1.5651 |
| 1.4998 | 1.28 | 34000 | 1.5650 |
| 1.7069 | 1.3 | 34500 | 1.5642 |
| 1.5453 | 1.32 | 35000 | 1.5643 |
| 1.5378 | 1.34 | 35500 | 1.5640 |
| 1.5702 | 1.36 | 36000 | 1.5643 |
| 1.6593 | 1.38 | 36500 | 1.5641 |
| 1.4526 | 1.4 | 37000 | 1.5641 |
| 1.5875 | 1.42 | 37500 | 1.5635 |
| 1.7064 | 1.44 | 38000 | 1.5632 |
| 1.6517 | 1.45 | 38500 | 1.5629 |
| 1.5637 | 1.47 | 39000 | 1.5630 |
| 1.5557 | 1.49 | 39500 | 1.5632 |
| 1.6615 | 1.51 | 40000 | 1.5626 |
| 1.5869 | 1.53 | 40500 | 1.5629 |
| 1.6263 | 1.55 | 41000 | 1.5622 |
| 1.5958 | 1.57 | 41500 | 1.5624 |
| 1.5646 | 1.59 | 42000 | 1.5620 |
| 1.5605 | 1.61 | 42500 | 1.5620 |
| 1.5753 | 1.62 | 43000 | 1.5621 |
| 1.6315 | 1.64 | 43500 | 1.5618 |
| 1.6351 | 1.66 | 44000 | 1.5616 |
| 1.4516 | 1.68 | 44500 | 1.5615 |
| 1.6654 | 1.7 | 45000 | 1.5616 |
| 1.4796 | 1.72 | 45500 | 1.5613 |
| 1.7079 | 1.74 | 46000 | 1.5613 |
| 1.6877 | 1.76 | 46500 | 1.5613 |
| 1.5899 | 1.78 | 47000 | 1.5612 |
| 1.5419 | 1.79 | 47500 | 1.5609 |
| 1.5972 | 1.81 | 48000 | 1.5611 |
| 1.6402 | 1.83 | 48500 | 1.5609 |
| 1.6036 | 1.85 | 49000 | 1.5607 |
| 1.5839 | 1.87 | 49500 | 1.5607 |
| 1.6727 | 1.89 | 50000 | 1.5608 |
| 1.5385 | 1.91 | 50500 | 1.5605 |
| 1.5856 | 1.93 | 51000 | 1.5608 |
| 1.6168 | 1.95 | 51500 | 1.5604 |
| 1.5426 | 1.96 | 52000 | 1.5605 |
| 1.5768 | 1.98 | 52500 | 1.5603 |
| 1.519 | 2.0 | 53000 | 1.5606 |
| 1.615 | 2.02 | 53500 | 1.5607 |
| 1.6096 | 2.04 | 54000 | 1.5606 |
| 1.5881 | 2.06 | 54500 | 1.5604 |
| 1.5782 | 2.08 | 55000 | 1.5604 |
| 1.6988 | 2.1 | 55500 | 1.5604 |
| 1.6284 | 2.12 | 56000 | 1.5604 |
| 1.6219 | 2.13 | 56500 | 1.5605 |
| 1.5288 | 2.15 | 57000 | 1.5604 |
| 1.57 | 2.17 | 57500 | 1.5603 |
| 1.6524 | 2.19 | 58000 | 1.5605 |
| 1.5774 | 2.21 | 58500 | 1.5602 |
| 1.5434 | 2.23 | 59000 | 1.5601 |
| 1.4985 | 2.25 | 59500 | 1.5602 |
| 1.4937 | 2.27 | 60000 | 1.5602 |
| 1.5134 | 2.29 | 60500 | 1.5601 |
| 1.5064 | 2.3 | 61000 | 1.5601 |
| 1.6091 | 2.32 | 61500 | 1.5601 |
| 1.6257 | 2.34 | 62000 | 1.5600 |
| 1.6497 | 2.36 | 62500 | 1.5601 |
| 1.5469 | 2.38 | 63000 | 1.5599 |
| 1.5453 | 2.4 | 63500 | 1.5600 |
| 1.5256 | 2.42 | 64000 | 1.5599 |
| 1.5616 | 2.44 | 64500 | 1.5600 |
| 1.6449 | 2.46 | 65000 | 1.5600 |
| 1.6298 | 2.47 | 65500 | 1.5598 |
| 1.697 | 2.49 | 66000 | 1.5599 |
| 1.5351 | 2.51 | 66500 | 1.5598 |
| 1.5463 | 2.53 | 67000 | 1.5599 |
| 1.6256 | 2.55 | 67500 | 1.5598 |
| 1.5567 | 2.57 | 68000 | 1.5598 |
| 1.6036 | 2.59 | 68500 | 1.5599 |
| 1.5113 | 2.61 | 69000 | 1.5598 |
| 1.6975 | 2.63 | 69500 | 1.5598 |
| 1.69 | 2.64 | 70000 | 1.5599 |
| 1.5828 | 2.66 | 70500 | 1.5598 |
| 1.6462 | 2.68 | 71000 | 1.5598 |
| 1.5645 | 2.7 | 71500 | 1.5598 |
| 1.5385 | 2.72 | 72000 | 1.5599 |
| 1.6244 | 2.74 | 72500 | 1.5599 |
| 1.5805 | 2.76 | 73000 | 1.5599 |
| 1.6334 | 2.78 | 73500 | 1.5599 |
| 1.5254 | 2.8 | 74000 | 1.5598 |
| 1.5892 | 2.81 | 74500 | 1.5599 |
| 1.68 | 2.83 | 75000 | 1.5599 |
| 1.5866 | 2.85 | 75500 | 1.5598 |
| 1.5692 | 2.87 | 76000 | 1.5598 |
| 1.4843 | 2.89 | 76500 | 1.5598 |
| 1.633 | 2.91 | 77000 | 1.5598 |
| 1.6205 | 2.93 | 77500 | 1.5598 |
| 1.5802 | 2.95 | 78000 | 1.5598 |
| 1.5723 | 2.97 | 78500 | 1.5598 |
| 1.6153 | 2.98 | 79000 | 1.5598 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
mergekit-community/mergekit-slerp-jeyctse
|
mergekit-community
| 2024-04-10T18:07:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Equall/Saul-7B-Base",
"base_model:merge:Equall/Saul-7B-Base",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:merge:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-10T18:04:10Z |
---
base_model:
- HuggingFaceH4/zephyr-7b-beta
- Equall/Saul-Base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Equall/Saul-Base
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [0, 32]
merge_method: slerp
base_model: HuggingFaceH4/zephyr-7b-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
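A minimal usage sketch for loading the merged model (the repository name is taken from this card's metadata; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/mergekit-slerp-jeyctse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```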
|
sheshuan/distilbert-base-uncased-finetuned-subj_obj_1.1
|
sheshuan
| 2024-04-10T18:06:41Z | 117 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T17:31:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KevinNabuule/cassava-disease-classifier
|
KevinNabuule
| 2024-04-10T18:01:39Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-04-10T17:59:43Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_7_prob
|
RolMax
| 2024-04-10T17:57:31Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-04-10T17:57:27Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_7_prob
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_7_prob")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 851
* Number of training documents: 91420
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | du - ich - bin - hab - ja | 2419 | 0_du_ich_bin_hab |
| 1 | polizei - polizisten - polizeigewalt - polizist - messer | 913 | 1_polizei_polizisten_polizeigewalt_polizist |
| 2 | biolabore - biowaffen - biologische - biologischen - labore | 600 | 2_biolabore_biowaffen_biologische_biologischen |
| 3 | flüchtlinge - migranten - flüchtlingen - kriegsflüchtlinge - ukrainer | 803 | 3_flüchtlinge_migranten_flüchtlingen_kriegsflüchtlinge |
| 4 | deutschland - deutschen - deutsche - deutschlands - reich | 874 | 4_deutschland_deutschen_deutsche_deutschlands |
| 5 | weihnachten - weihnachtsfest - weihnachtsmann - lockdown - weihnachtszeit | 528 | 5_weihnachten_weihnachtsfest_weihnachtsmann_lockdown |
| 6 | mariupol - kiew - stadt - ukrainische - russische | 580 | 6_mariupol_kiew_stadt_ukrainische |
| 7 | video - youtube - schau - here - videos | 508 | 7_video_youtube_schau_here |
| 8 | trump - biden - hunter - joe - cia | 498 | 8_trump_biden_hunter_joe |
| 9 | hersh - nord - seymour - stream - pipelines | 361 | 9_hersh_nord_seymour_stream |
| 10 | öl - gas - russisches - russland - embargo | 686 | 10_öl_gas_russisches_russland |
| 11 | negativbewertungen - attacken - german - aufhalten - deutsch | 646 | 11_negativbewertungen_attacken_german_aufhalten |
| 12 | proteste - städten - corona - tausende - polizei | 432 | 12_proteste_städten_corona_tausende |
| 13 | 18 - uhr - rathaus - markt - 00 | 322 | 13_18_uhr_rathaus_markt |
| 14 | israel - juden - iran - jüdischen - jüdische | 309 | 14_israel_juden_iran_jüdischen |
| 15 | masken - maske - cochrane - tragen - maskenpflicht | 303 | 15_masken_maske_cochrane_tragen |
| 16 | krieg - trump - military - militär - ect | 426 | 16_krieg_trump_military_militär |
| 17 | impfung - impfungen - impfstoffe - impfstoff - nebenwirkungen | 909 | 17_impfung_impfungen_impfstoffe_impfstoff |
| 18 | frauen - gender - transgender - frau - geschlecht | 324 | 18_frauen_gender_transgender_frau |
| 19 | partioten - gruß - tägliche - politische - geoengineering | 258 | 19_partioten_gruß_tägliche_politische |
| 20 | erdbeben - türkei - syrien - beben - syrischen | 359 | 20_erdbeben_türkei_syrien_beben |
| 21 | strasse - 12 - folgt - 20 - mir | 430 | 21_strasse_12_folgt_20 |
| 22 | putin - putins - wladimir - waters - russischen | 609 | 22_putin_putins_wladimir_waters |
| 23 | weizen - getreide - mais - ag - export | 236 | 23_weizen_getreide_mais_ag |
| 24 | strasse - folgt - mir - heute - ravensburg | 252 | 24_strasse_folgt_mir_heute |
| 25 | telegram - gu - mail - newsmax - folgt | 282 | 25_telegram_gu_mail_newsmax |
| 26 | russen - russische - russischen - putin - russland | 824 | 26_russen_russische_russischen_putin |
| 27 | spd - berlin - giffey - wahl - cdu | 323 | 27_spd_berlin_giffey_wahl |
| 28 | english - greetings - patriots - reviews - means | 225 | 28_english_greetings_patriots_reviews |
| 29 | telegram - apple - app - geoblocking - android | 257 | 29_telegram_apple_app_geoblocking |
| 30 | medien - journalisten - journalismus - bundespressekonferenz - mainstream | 455 | 30_medien_journalisten_journalismus_bundespressekonferenz |
| 31 | richter - abs - urteil - bundesverfassungsgericht - gericht | 428 | 31_richter_abs_urteil_bundesverfassungsgericht |
| 32 | ukraine - krieg - ukrainischen - ukrainische - ukrainekrieg | 928 | 32_ukraine_krieg_ukrainischen_ukrainische |
| 33 | truth - exposed - reviews - attacks - means | 290 | 33_truth_exposed_reviews_attacks |
| 34 | kinder - stiko - impfung - kindern - impfkommission | 249 | 34_kinder_stiko_impfung_kindern |
| 35 | energiepreise - gas - strompreise - strom - rtv | 279 | 35_energiepreise_gas_strompreise_strom |
| 36 | lauterbach - karl - gesundheitsminister - lauterbachs - bundesgesundheitsminister | 199 | 36_lauterbach_karl_gesundheitsminister_lauterbachs |
| 37 | corona - maßnahmen - infektionsschutzgesetz - regeln - märz | 256 | 37_corona_maßnahmen_infektionsschutzgesetz_regeln |
| 38 | china - sanktionen - peking - flugzeuge - chinas | 178 | 38_china_sanktionen_peking_flugzeuge |
| 39 | pfizer - fda - nebenwirkungen - dokumente - arena | 169 | 39_pfizer_fda_nebenwirkungen_dokumente |
| 40 | ignazbearth - wertschätzung - me - läuft - paypal | 183 | 40_ignazbearth_wertschätzung_me_läuft |
| 41 | insekten - lebensmitteln - lebensmittel - essen - kremerskothen | 150 | 41_insekten_lebensmitteln_lebensmittel_essen |
| 42 | zerstückelung - versteckten - ruin - ziele - europas | 139 | 42_zerstückelung_versteckten_ruin_ziele |
| 43 | mrna - dna - pfizer - prozesse - entzündliche | 263 | 43_mrna_dna_pfizer_prozesse |
| 44 | sanktionen - russland - russischen - eu - russische | 314 | 44_sanktionen_russland_russischen_eu |
| 45 | neuestes - zerstörten - zerstückelung - versteckten - ruin | 197 | 45_neuestes_zerstörten_zerstückelung_versteckten |
| 46 | trinkwasser - wasser - gechlort - wasserversorgung - stadtwerke | 154 | 46_trinkwasser_wasser_gechlort_wasserversorgung |
| 47 | schulschließungen - schulen - schulleiterin - schule - gerichtshof | 217 | 47_schulschließungen_schulen_schulleiterin_schule |
| 48 | ballon - balloon - ballons - chinese - spionageballon | 184 | 48_ballon_balloon_ballons_chinese |
| 49 | patriots - greetings - personal - go - my | 209 | 49_patriots_greetings_personal_go |
| 50 | nord - stream - pipeline - ostsee - gaspipeline | 154 | 50_nord_stream_pipeline_ostsee |
| 51 | impfpflicht - österreich - ausgesetzt - verhältnismäßig - strafen | 198 | 51_impfpflicht_österreich_ausgesetzt_verhältnismäßig |
| 52 | putin - ukraine - russland - nato - russlands | 1211 | 52_putin_ukraine_russland_nato |
| 53 | gesellschaft - deine - menschen - du - dich | 319 | 53_gesellschaft_deine_menschen_du |
| 54 | klimawandel - erwärmung - klima - wetteradler - co2 | 198 | 54_klimawandel_erwärmung_klima_wetteradler |
| 55 | eva - liebe - andreas - grüße - lieber | 184 | 55_eva_liebe_andreas_grüße |
| 56 | schiff - häfen - schiffe - hafen - container | 161 | 56_schiff_häfen_schiffe_hafen |
| 57 | ärzte - ärztekammer - medizin - mediziner - patienten | 229 | 57_ärzte_ärztekammer_medizin_mediziner |
| 58 | impfpflicht - impfstreik - protest - berlin - pflegekräfte | 118 | 58_impfpflicht_impfstreik_protest_berlin |
| 59 | münchen - 12 - 2021 - wien - bielefeld | 243 | 59_münchen_12_2021_wien |
| 60 | leopard - panzer - kampfpanzer - lieferung - panzern | 144 | 60_leopard_panzer_kampfpanzer_lieferung |
| 61 | budapest - antifa - linksextremisten - ungarn - polizei | 130 | 61_budapest_antifa_linksextremisten_ungarn |
| 62 | wähler - partei - fdp - fpö - wahlen | 225 | 62_wähler_partei_fdp_fpö |
| 63 | chlorella - kurkuma - spirulina - bitterstoffe - dmso | 149 | 63_chlorella_kurkuma_spirulina_bitterstoffe |
| 64 | frieden - wagenknecht - brandenburger - schwarzer - tor | 191 | 64_frieden_wagenknecht_brandenburger_schwarzer |
| 65 | ufo - ufos - venus - aliens - alien | 158 | 65_ufo_ufos_venus_aliens |
| 66 | schulen - maske - masken - maskenpflicht - schüler | 147 | 66_schulen_maske_masken_maskenpflicht |
| 67 | einzelhandel - 2g - oberverwaltungsgericht - lüneburg - regel | 108 | 67_einzelhandel_2g_oberverwaltungsgericht_lüneburg |
| 68 | kirche - erzbischof - vigano - disney - kirchen | 153 | 68_kirche_erzbischof_vigano_disney |
| 69 | ganser - daniele - kilez - info - durchblättern | 250 | 69_ganser_daniele_kilez_info |
| 70 | maxwell - epstein - ghislaine - epsteins - andrew | 108 | 70_maxwell_epstein_ghislaine_epsteins |
| 71 | germany - devastated - my - end - latest | 142 | 71_germany_devastated_my_end |
| 72 | denk - 12 - denkanstoß - denkbrief - inbee | 205 | 72_denk_12_denkanstoß_denkbrief |
| 73 | autos - auto - elektroautos - pkw - verkehrswende | 141 | 73_autos_auto_elektroautos_pkw |
| 74 | türkei - turkey - chapter - türkiye - türken | 191 | 74_türkei_turkey_chapter_türkiye |
| 75 | neutralität - bismarck - österreichs - beitritt - schönhausen | 108 | 75_neutralität_bismarck_österreichs_beitritt |
| 76 | hefe - selbstversorgung - fertiggerichte - garten - pizza | 101 | 76_hefe_selbstversorgung_fertiggerichte_garten |
| 77 | lufthansa - flughafen - flughäfen - airport - frankfurt | 108 | 77_lufthansa_flughafen_flughäfen_airport |
| 78 | ignazbearth - wertschätzung - me - strasse - paypal | 131 | 78_ignazbearth_wertschätzung_me_strasse |
| 79 | stromausfall - strom - betroffen - haushalte - ausgefallen | 127 | 79_stromausfall_strom_betroffen_haushalte |
| 80 | frieden - aufkommens - beleidigender - unsachlicher - deaktivieren | 135 | 80_frieden_aufkommens_beleidigender_unsachlicher |
| 81 | impfpflicht - bundestag - impfpflichtgesetz - allgemeine - gesetzentwurf | 374 | 81_impfpflicht_bundestag_impfpflichtgesetz_allgemeine |
| 82 | erwachenbefreiung - eindecken - popcorn - zugreifen - medwedew | 97 | 82_erwachenbefreiung_eindecken_popcorn_zugreifen |
| 83 | tweet - twitter - ceiberweiber - tweets - account | 102 | 83_tweet_twitter_ceiberweiber_tweets |
| 84 | pflanzen - cannabis - permakultur - garten - sanddorn | 99 | 84_pflanzen_cannabis_permakultur_garten |
| 85 | merkel - angela - friedenspreis - unesco - merkels | 115 | 85_merkel_angela_friedenspreis_unesco |
| 86 | armut - krise - stillman - zuwanderung - verlustgeschäft | 184 | 86_armut_krise_stillman_zuwanderung |
| 87 | facebook - mittelerde - bertelsmann - faktenchecker - müller | 106 | 87_facebook_mittelerde_bertelsmann_faktenchecker |
| 88 | musk - starlink - elon - spacex - satelliten | 89 | 88_musk_starlink_elon_spacex |
| 89 | shenzhen - china - hongkong - qr - lockdown | 115 | 89_shenzhen_china_hongkong_qr |
| 90 | tschernobyl - saporischschja - iaea - kernkraftwerk - akw | 99 | 90_tschernobyl_saporischschja_iaea_kernkraftwerk |
| 91 | nesara - gesara - plan - lernst - biblisch | 137 | 91_nesara_gesara_plan_lernst |
| 92 | hitler - nazi - nazis - umerziehung - adolf | 214 | 92_hitler_nazi_nazis_umerziehung |
| 93 | grünen - grüne - grundrecht - schwarz - hoffmann | 164 | 93_grünen_grüne_grundrecht_schwarz |
| 94 | diesel - liter - benzin - euro - cent | 166 | 94_diesel_liter_benzin_euro |
| 95 | 5g - emf - mobilfunk - strahlung - 2004 | 96 | 95_5g_emf_mobilfunk_strahlung |
| 96 | schwachstelle - hacker - sicherheitslücke - log4j - bsi | 106 | 96_schwachstelle_hacker_sicherheitslücke_log4j |
| 97 | palma - la - vulkan - eruption - lava | 107 | 97_palma_la_vulkan_eruption |
| 98 | omikron - variante - südafrika - coronavirus - omicron | 261 | 98_omikron_variante_südafrika_coronavirus |
| 99 | versionen - crema - arabica - sicherheitsgefühl - bohne | 94 | 99_versionen_crema_arabica_sicherheitsgefühl |
| 100 | apolut - app - ios - huawei - kostenlose | 161 | 100_apolut_app_ios_huawei |
| 101 | verbrechen - tatsachenbehauptungen - menschlichkeit - folter - corona | 383 | 101_verbrechen_tatsachenbehauptungen_menschlichkeit_folter |
| 102 | cdl - cbdc - patentierbar - preiswert - krankheitserregenden | 86 | 102_cdl_cbdc_patentierbar_preiswert |
| 103 | barrel - ölpreis - öl - brent - ölpreise | 143 | 103_barrel_ölpreis_öl_brent |
| 104 | kinder - schule - eltern - löwenmamas - merith | 210 | 104_kinder_schule_eltern_löwenmamas |
| 105 | bitcoin - geldautomaten - kryptowährungen - krypto - sparkassen | 129 | 105_bitcoin_geldautomaten_kryptowährungen_krypto |
| 106 | hacker - cyber - cyberangriffe - bsi - cyberkrieg | 105 | 106_hacker_cyber_cyberangriffe_bsi |
| 107 | who - pandemievertrag - abkommen - mitgliedsstaaten - hpi | 107 | 107_who_pandemievertrag_abkommen_mitgliedsstaaten |
| 108 | löschautomatik - kochgelegenheit - ölpumpen - steuerungen - warmer | 101 | 108_löschautomatik_kochgelegenheit_ölpumpen_steuerungen |
| 109 | todesfälle - covid - kindern - 19 - nebenwirkungen | 218 | 109_todesfälle_covid_kindern_19 |
| 110 | ablage - holzstücken - nachgeschoben - briketts - holzkohle | 96 | 110_ablage_holzstücken_nachgeschoben_briketts |
| 111 | mega - wien - demo - menschenrechte - heldenplatz | 126 | 111_mega_wien_demo_menschenrechte |
| 112 | dresden - 1945 - dresdens - gedenken - schwarzer | 124 | 112_dresden_1945_dresdens_gedenken |
| 113 | grundrechte - mfg - neubeginn - freiheit - österreich | 91 | 113_grundrechte_mfg_neubeginn_freiheit |
| 114 | wasserstoff - 2030 - energien - erneuerbaren - kohleausstieg | 188 | 114_wasserstoff_2030_energien_erneuerbaren |
| 115 | atomkraft - atomkraftwerke - kraftwerk - kernkraftwerke - habeck | 92 | 115_atomkraft_atomkraftwerke_kraftwerk_kernkraftwerke |
| 116 | reitschuster - boris - zensurwelle - zensurfreien - empfiehlt | 89 | 116_reitschuster_boris_zensurwelle_zensurfreien |
| 117 | französisch - donner - küssel - strickenkurs - dreisprachig | 266 | 117_französisch_donner_küssel_strickenkurs |
| 118 | al - islamisten - syrien - islam - islamischen | 82 | 118_al_islamisten_syrien_islam |
| 119 | övp - niederösterreich - neuwahlen - österreicher - spö | 176 | 119_övp_niederösterreich_neuwahlen_österreicher |
| 120 | 00 - uhr - hauptplatz - 18 - 12 | 140 | 120_00_uhr_hauptplatz_18 |
| 121 | coronavirus - booster - impfung - impfungen - dosis | 221 | 121_coronavirus_booster_impfung_impfungen |
| 122 | germany - devastated - latest - end - my | 105 | 122_germany_devastated_latest_end |
| 123 | mutter - geburtshaus - schwangere - baby - babys | 118 | 123_mutter_geburtshaus_schwangere_baby |
| 124 | wien - 12 - demo - 2021 - menschenfischen | 159 | 124_wien_12_demo_2021 |
| 125 | quade - herr - minute - frag - wiedermal | 72 | 125_quade_herr_minute_frag |
| 126 | inflation - inflationsrate - prozent - preise - verbraucherpreise | 180 | 126_inflation_inflationsrate_prozent_preise |
| 127 | kanal - abonnieren - kanals - ausprobieren - bleibe | 80 | 127_kanal_abonnieren_kanals_ausprobieren |
| 128 | kinder - depressionen - jugendlichen - kindern - depressive | 154 | 128_kinder_depressionen_jugendlichen_kindern |
| 129 | rothschild - rothschilds - logen - familie - dynastie | 83 | 129_rothschild_rothschilds_logen_familie |
| 130 | jane - ruby - peters - stew - get | 110 | 130_jane_ruby_peters_stew |
| 131 | 02 - 2023 - 06 - 2023folgt - 13 | 94 | 131_02_2023_06_2023folgt |
| 132 | demonstrationen - demoteilnehmer - demo - friedfertigkeit - traiskirchen | 124 | 132_demonstrationen_demoteilnehmer_demo_friedfertigkeit |
| 133 | übersterblichkeit - schweden - sterbefallzahlen - mittleren - 2020 | 108 | 133_übersterblichkeit_schweden_sterbefallzahlen_mittleren |
| 134 | unterstütze - anmelden - rutter - newsletter - martin | 68 | 134_unterstütze_anmelden_rutter_newsletter |
| 135 | korsika - colonna - korsischen - yvan - sepp | 78 | 135_korsika_colonna_korsischen_yvan |
| 136 | freie - beitreten - medienarbeit - unabhängige - medien | 136 | 136_freie_beitreten_medienarbeit_unabhängige |
| 137 | regelmäßige - teilnehmer - ca - spaziergänge - spaziergang | 127 | 137_regelmäßige_teilnehmer_ca_spaziergänge |
| 138 | petromax - marcel - anymore - unfortunately - kritisches | 128 | 138_petromax_marcel_anymore_unfortunately |
| 139 | pharmaindustrie - pharma - medikamente - medikament - medizin | 192 | 139_pharmaindustrie_pharma_medikamente_medikament |
| 140 | satanisten - satanische - satan - satanismus - srce | 83 | 140_satanisten_satanische_satan_satanismus |
| 141 | demokratie - direkte - parteidiktatur - hilf - kretschmar | 103 | 141_demokratie_direkte_parteidiktatur_hilf |
| 142 | buch - charakter - tangsworld - hauptaspekte - bl | 140 | 142_buch_charakter_tangsworld_hauptaspekte |
| 143 | tool - taifun - philippinen - stk - lichte | 142 | 143_tool_taifun_philippinen_stk |
| 144 | mrna - impfstoffe - malone - erfinder - impfstoff | 139 | 144_mrna_impfstoffe_malone_erfinder |
| 145 | orf - gis - zwangsgebühren - ziegler - registriert | 60 | 145_orf_gis_zwangsgebühren_ziegler |
| 146 | lauterbach - karl - bkk - bundesgesundheitsminister - schöfbeck | 129 | 146_lauterbach_karl_bkk_bundesgesundheitsminister |
| 147 | supergesunde - dörrautomat - vielfältige - infrarot - geschmack | 73 | 147_supergesunde_dörrautomat_vielfältige_infrarot |
| 148 | magnesium - watford - magnesiummangel - bestellung - mineralstoff | 112 | 148_magnesium_watford_magnesiummangel_bestellung |
| 149 | vitamin - k2 - normalen - fettlösliches - ester | 135 | 149_vitamin_k2_normalen_fettlösliches |
| 150 | gold - goldpreis - dollar - goldmünzen - edelmetalle | 76 | 150_gold_goldpreis_dollar_goldmünzen |
| 151 | lula - helping - brighteon - bolsa - send | 114 | 151_lula_helping_brighteon_bolsa |
| 152 | afd - vorsitz - partei - democracy - parteitag | 134 | 152_afd_vorsitz_partei_democracy |
| 153 | sellner - postfach - monero - monatlich - unterstützen | 101 | 153_sellner_postfach_monero_monatlich |
| 154 | politiker - demokratie - politikern - standrechtlich - hinzurichten | 221 | 154_politiker_demokratie_politikern_standrechtlich |
| 155 | megawattstunde - erdgas - gaspreis - gas - gaspreise | 85 | 155_megawattstunde_erdgas_gaspreis_gas |
| 156 | bali - klima - klimaschützer - generation - klimakleber | 65 | 156_bali_klima_klimaschützer_generation |
| 157 | schlaf - nacht - schlafen - hellwach - wälzt | 63 | 157_schlaf_nacht_schlafen_hellwach |
| 158 | grad - schnee - wetter - wetterdienst - kälte | 67 | 158_grad_schnee_wetter_wetterdienst |
| 159 | chip - implantate - kramer - haut - chips | 87 | 159_chip_implantate_kramer_haut |
| 160 | rauch - gesundheitsminister - johannes - spahn - drosten | 201 | 160_rauch_gesundheitsminister_johannes_spahn |
| 161 | anwälte - bundesverfassungsgericht - karlsruhe - aufklärung - demonstration | 60 | 161_anwälte_bundesverfassungsgericht_karlsruhe_aufklärung |
| 162 | ohio - zug - raststätten - fahrer - tankkarten | 71 | 162_ohio_zug_raststätten_fahrer |
| 163 | convoy - trucker - konvoi - dc - washington | 72 | 163_convoy_trucker_konvoi_dc |
| 164 | erdbeben - fukushima - stärke - japan - schnee | 117 | 164_erdbeben_fukushima_stärke_japan |
| 165 | kimmich - joshua - kimmichs - zdf - fußball | 55 | 165_kimmich_joshua_kimmichs_zdf |
| 166 | vonovia - wohnungen - neubau - immobilien - wohnungsmarkt | 130 | 166_vonovia_wohnungen_neubau_immobilien |
| 167 | impfpflicht - verfassungswidrig - micropur - ungeimpfter - allgemeine | 199 | 167_impfpflicht_verfassungswidrig_micropur_ungeimpfter |
| 168 | bennett - chabad - israels - naftali - premier | 128 | 168_bennett_chabad_israels_naftali |
| 169 | kampfjets - soldaten - nato - raketen - lieferung | 170 | 169_kampfjets_soldaten_nato_raketen |
| 170 | sars - cov - impfdosen - untersuchten - virus | 125 | 170_sars_cov_impfdosen_untersuchten |
| 171 | kallistalk - artikel - weiterbildung - anmerkungen - logo | 174 | 171_kallistalk_artikel_weiterbildung_anmerkungen |
| 172 | fluchtrucksack - outdoor - grill - tannen - kiefernzapfen | 53 | 172_fluchtrucksack_outdoor_grill_tannen |
| 173 | mgfimc18zvif6dccixmqaap11tg4tf6acj - ltc - 0xf39bdfb41f639b82e3d2bf022828bc6394f533a3 - 3jvdnoywmb93hsrgk58zstuxg11pw9mksr - ada | 53 | 173_mgfimc18zvif6dccixmqaap11tg4tf6acj_ltc_0xf39bdfb41f639b82e3d2bf022828bc6394f533a3_3jvdnoywmb93hsrgk58zstuxg11pw9mksr |
| 174 | digitale - währung - zentralbanken - digitalen - dollar | 123 | 174_digitale_währung_zentralbanken_digitalen |
| 175 | balaton - deutschsprachigen - gemeinschaft - arslan - ben | 65 | 175_balaton_deutschsprachigen_gemeinschaft_arslan |
| 176 | angst - broschüre - risch - yale - barrington | 198 | 176_angst_broschüre_risch_yale |
| 177 | australien - camps - australia - prison - mappe | 57 | 177_australien_camps_australia_prison |
| 178 | spike - protein - spikeprotein - nukleokapsid - hirn | 64 | 178_spike_protein_spikeprotein_nukleokapsid |
| 179 | angst - panik - entwarnung - furcht - knopf | 166 | 179_angst_panik_entwarnung_furcht |
| 180 | montagsspaziergang - nürnberg - 12 - frieden - demo | 190 | 180_montagsspaziergang_nürnberg_12_frieden |
| 181 | mayerweck - psychopathen - psychedelika - dauerpropaganda - stockholmsyndrom | 80 | 181_mayerweck_psychopathen_psychedelika_dauerpropaganda |
| 182 | zusammengerolltes - flutkatastrophe - sticks - millimetern - diamanten | 89 | 182_zusammengerolltes_flutkatastrophe_sticks_millimetern |
| 183 | milch - wasserzugabe - trocknungsprozess - abgepackt - dehydrierte | 68 | 183_milch_wasserzugabe_trocknungsprozess_abgepackt |
| 184 | sturmfeuerzeug - legendäre - mütze - gegenleistung - preis | 78 | 184_sturmfeuerzeug_legendäre_mütze_gegenleistung |
| 185 | rt - sputnik - today - staatsmedien - inhalte | 68 | 185_rt_sputnik_today_staatsmedien |
| 186 | roxon - backmischungen - aramid - storm - feldbett | 57 | 186_roxon_backmischungen_aramid_storm |
| 187 | billa - chelsea - league - bull - riesentorlauf | 87 | 187_billa_chelsea_league_bull |
| 188 | freedom - remedies - healing - naturalnews - subject | 86 | 188_freedom_remedies_healing_naturalnews |
| 189 | reawaken - tour - sold - america - events | 58 | 189_reawaken_tour_sold_america |
| 190 | tesla - wse - grünheide - gigafactory - kubikmeter | 70 | 190_tesla_wse_grünheide_gigafactory |
| 191 | greenpass - eu - novaccinepassportsanywhere - europarat - gerichtshof | 89 | 191_greenpass_eu_novaccinepassportsanywhere_europarat |
| 192 | leer - vollkorn - lebensmitteldiscounter - dosenbrot - überschwemmung | 65 | 192_leer_vollkorn_lebensmitteldiscounter_dosenbrot |
| 193 | mama - kinder - mutter - kind - eltern | 125 | 193_mama_kinder_mutter_kind |
| 194 | märz - dinar - anleihen - ethereum - wechselkurs | 101 | 194_märz_dinar_anleihen_ethereum |
| 195 | taschenmesser - forester - funktionen - victorinox - holzsäge | 91 | 195_taschenmesser_forester_funktionen_victorinox |
| 196 | linz - 12 - 2021 - 20 - gartz | 140 | 196_linz_12_2021_20 |
| 197 | gesundheitspersonal - lautstark - warnstreik - ärztekammer - impfzwang | 92 | 197_gesundheitspersonal_lautstark_warnstreik_ärztekammer |
| 198 | norbert - qr - maps - ai - schwarzer | 96 | 198_norbert_qr_maps_ai |
| 199 | tornados - kentucky - beshear - tornado - bundesstaat | 50 | 199_tornados_kentucky_beshear_tornado |
| 200 | bbc - borrell - josep - zensur - zusätze | 76 | 200_bbc_borrell_josep_zensur |
| 201 | lichtgrüße - lichtgrüsse - engmaschiger - lebensmonat - dritt | 58 | 201_lichtgrüße_lichtgrüsse_engmaschiger_lebensmonat |
| 202 | oven - dutch - mah - powerbank - destille | 65 | 202_oven_dutch_mah_powerbank |
| 203 | wissenschaft - wissenschaftler - jackson - gegnerischen - kritik | 83 | 203_wissenschaft_wissenschaftler_jackson_gegnerischen |
| 204 | mückstein - rücktritt - gesundheitsminister - wolfgang - tritt | 63 | 204_mückstein_rücktritt_gesundheitsminister_wolfgang |
| 205 | cum - warburg - ex - scholz - hamburger | 106 | 205_cum_warburg_ex_scholz |
| 206 | russische - russiagate - rt - russischen - zensur | 94 | 206_russische_russiagate_rt_russischen |
| 207 | menschenfeinde - verschwörungstheorie - verschwörungstheorien - verschwörungstheoretiker - lügen | 156 | 207_menschenfeinde_verschwörungstheorie_verschwörungstheorien_verschwörungstheoretiker |
| 208 | rezession - wirtschaft - autoindustrie - raiffeisen - ökonomen | 192 | 208_rezession_wirtschaft_autoindustrie_raiffeisen |
| 209 | impfpflicht - impfstoff - allgemeine - 2029 - impfung | 200 | 209_impfpflicht_impfstoff_allgemeine_2029 |
| 210 | gaskartuschen - greetings - patriots - personal - go | 111 | 210_gaskartuschen_greetings_patriots_personal |
| 211 | ministerin - minister - plagiats - zadic - amt | 131 | 211_ministerin_minister_plagiats_zadic |
| 212 | grosz - gerald - geraldgrosz - oe24 - com | 63 | 212_grosz_gerald_geraldgrosz_oe24 |
| 213 | pcr - test - tests - hesch - cdc | 71 | 213_pcr_test_tests_hesch |
| 214 | österreich - adnet - clock - 11 - bloss | 145 | 214_österreich_adnet_clock_11 |
| 215 | stew - content - manly - activism - clinic | 48 | 215_stew_content_manly_activism |
| 216 | gunnar - kaiser - river - kaisertv - kanalmitgliedschaft | 114 | 216_gunnar_kaiser_river_kaisertv |
| 217 | maskenpflicht - taxifahrer - maske - vermummungsverbot - tragen | 50 | 217_maskenpflicht_taxifahrer_maske_vermummungsverbot |
| 218 | regierenden - regime - linken - ständig - gesellschaft | 223 | 218_regierenden_regime_linken_ständig |
| 219 | hallo - meinung - überweisung - 7605 - 0013 | 50 | 219_hallo_meinung_überweisung_7605 |
| 220 | karmasin - sophie - calvez - familienministerin - övp | 67 | 220_karmasin_sophie_calvez_familienministerin |
| 221 | dollar - milliarden - millionen - eu - euro | 72 | 221_dollar_milliarden_millionen_eu |
| 222 | perspektiven - arge - veranstalten - andauernde - bundeskazleramt | 96 | 222_perspektiven_arge_veranstalten_andauernde |
| 223 | dr - facharzt - univ - evidenzbasierte - prof | 70 | 223_dr_facharzt_univ_evidenzbasierte |
| 224 | emf - c60evo - code - evui - discount | 73 | 224_emf_c60evo_code_evui |
| 225 | rassismus - hautfarbe - white - schwarze - weißer | 51 | 225_rassismus_hautfarbe_white_schwarze |
| 226 | nato - usa - guterres - morales - krieg | 161 | 226_nato_usa_guterres_morales |
| 227 | transhumanismus - magnet - stefan - transhumanisten - auflage | 46 | 227_transhumanismus_magnet_stefan_transhumanisten |
| 228 | polens - verschwiegene - compact - geschichtsheft - schuld | 72 | 228_polens_verschwiegene_compact_geschichtsheft |
| 229 | eu - milliarden - sondervermögen - bundeswehr - scholz | 131 | 229_eu_milliarden_sondervermögen_bundeswehr |
| 230 | novavax - impfstoff - totimpfstoff - körperzellen - bahner | 94 | 230_novavax_impfstoff_totimpfstoff_körperzellen |
| 231 | fleisch - bäckereien - energieverbrauch - zutaten - brot | 160 | 231_fleisch_bäckereien_energieverbrauch_zutaten |
| 232 | versammlungen - leibnitz - 00 - 17 - aktionstag | 66 | 232_versammlungen_leibnitz_00_17 |
| 233 | mittelerde - müller - bodensafe - joachim - soul | 71 | 233_mittelerde_müller_bodensafe_joachim |
| 234 | alc - nano - lipide - lipid - 0315 | 66 | 234_alc_nano_lipide_lipid |
| 235 | 2029 - corona - verträge - omikron - coronavirus | 153 | 235_2029_corona_verträge_omikron |
| 236 | investieren - verdienen - senden - adresse - investition | 45 | 236_investieren_verdienen_senden_adresse |
| 237 | habeck - wirtschaftsminister - robert - reduction - grüne | 104 | 237_habeck_wirtschaftsminister_robert_reduction |
| 238 | covid - ivermectin - 19 - patienten - beatmung | 147 | 238_covid_ivermectin_19_patienten |
| 239 | intensivbetten - betten - intensivstationen - patienten - divi | 112 | 239_intensivbetten_betten_intensivstationen_patienten |
| 240 | kaliningrad - domizil - tagesaktuell - subjektiv - informationsagentur | 64 | 240_kaliningrad_domizil_tagesaktuell_subjektiv |
| 241 | virus - pandemie - variante - omikron - drosten | 274 | 241_virus_pandemie_variante_omikron |
| 242 | ukrainerin - vergewaltigt - tunesier - kistel - javid | 85 | 242_ukrainerin_vergewaltigt_tunesier_kistel |
| 243 | geschützt - niedrigstand - lagerbestand - eco - begrenzter | 44 | 243_geschützt_niedrigstand_lagerbestand_eco |
| 244 | zensur - blockierungen - qualitätssiegel - löschungen - strikes | 99 | 244_zensur_blockierungen_qualitätssiegel_löschungen |
| 245 | wintergrillen - anziehungspunkt - gartenparty - hingucker - tannen | 70 | 245_wintergrillen_anziehungspunkt_gartenparty_hingucker |
| 246 | patienten - ecmo - krankenhaus - corona - rki | 166 | 246_patienten_ecmo_krankenhaus_corona |
| 247 | innsbruck - tiroler - petzl - sabine - tageszeitung | 53 | 247_innsbruck_tiroler_petzl_sabine |
| 248 | österreich - österreicher - österreichische - krisensicherheitsgesetz - inszenierten | 172 | 248_österreich_österreicher_österreichische_krisensicherheitsgesetz |
| 249 | filter - wasserfilter - hohlfaser - zuverlässigste - verschmutzten | 44 | 249_filter_wasserfilter_hohlfaser_zuverlässigste |
| 250 | zoo - kinder - hannover - impft - düsseldorf | 87 | 250_zoo_kinder_hannover_impft |
| 251 | kaffee - guayusa - edelstahl - trinkbecher - natural | 66 | 251_kaffee_guayusa_edelstahl_trinkbecher |
| 252 | hyundai - stromgenerator - inverter - fortschrittlichen - mobil | 44 | 252_hyundai_stromgenerator_inverter_fortschrittlichen |
| 253 | gold - mehrwertsteuer - edelmetallen - rubel - goldbarren | 56 | 253_gold_mehrwertsteuer_edelmetallen_rubel |
| 254 | medizin - buch - naturheilkunde - bänden - book | 100 | 254_medizin_buch_naturheilkunde_bänden |
| 255 | orban - ungarn - viktor - ungarische - orbán | 51 | 255_orban_ungarn_viktor_ungarische |
| 256 | corona - aufarbeitung - maßnahmen - risikobewertung - gebauer | 155 | 256_corona_aufarbeitung_maßnahmen_risikobewertung |
| 257 | live - streamen - twitch - lbry - kanälen | 48 | 257_live_streamen_twitch_lbry |
| 258 | stromnetz - netz - stromversorgung - stromnetze - kadri | 53 | 258_stromnetz_netz_stromversorgung_stromnetze |
| 259 | antarktis - antarctica - spielfilmen - byrd - zukommt | 83 | 259_antarktis_antarctica_spielfilmen_byrd |
| 260 | zelensky - kireev - denis - ukrainischen - ukrainische | 200 | 260_zelensky_kireev_denis_ukrainischen |
| 261 | netzfund - app - apolut - link - standalone | 66 | 261_netzfund_app_apolut_link |
| 262 | cdu - maaßen - merz - georg - friedrich | 68 | 262_cdu_maaßen_merz_georg |
| 263 | selenskyj - wolodymyr - berlusconi - vigano - präsident | 129 | 263_selenskyj_wolodymyr_berlusconi_vigano |
| 264 | streik - dich - organisiertes - streikpotenzial - profilnamen | 57 | 264_streik_dich_organisiertes_streikpotenzial |
| 265 | video - hommage - kontaktadresse - frank - köstler | 133 | 265_video_hommage_kontaktadresse_frank |
| 266 | praktisch - eintopfofen - suppen - emaillierten - eintopfgerichte | 43 | 266_praktisch_eintopfofen_suppen_emaillierten |
| 267 | frieden - selbstbestimmung - 02 - freiheit - 2023 | 69 | 267_frieden_selbstbestimmung_02_freiheit |
| 268 | betet - resort - wien - stock - mittwochs | 44 | 268_betet_resort_wien_stock |
| 269 | manuka - honig - propolis - schmerzen - bedrop | 88 | 269_manuka_honig_propolis_schmerzen |
| 270 | catherine - thurner - catherines - kanalinfo - sendungen | 95 | 270_catherine_thurner_catherines_kanalinfo |
| 271 | tagesreport - livestreams - dlive - podcasts - attkisson | 88 | 271_tagesreport_livestreams_dlive_podcasts |
| 272 | music - discord - contribution - brown - minds | 41 | 272_music_discord_contribution_brown |
| 273 | foto - unregierbar - markel - madonna - prof | 79 | 273_foto_unregierbar_markel_madonna |
| 274 | duran - 0xd449694348b1d618eca2829bbc901782f5172689 - exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r - emc2 - hex | 49 | 274_duran_0xd449694348b1d618eca2829bbc901782f5172689_exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r_emc2 |
| 275 | oli - kanalmitglied - spende - partioten - gruß | 98 | 275_oli_kanalmitglied_spende_partioten |
| 276 | nostradamus - jahr - 2021 - 2026 - rebellin | 133 | 276_nostradamus_jahr_2021_2026 |
| 277 | heizgerätes - eingebaute - sauerstoffmangelsicherung - gasdruckregler - katalyt | 46 | 277_heizgerätes_eingebaute_sauerstoffmangelsicherung_gasdruckregler |
| 278 | außerparlamentarischer - videos - yt - untersuchungsausschuss - vernetzt | 45 | 278_außerparlamentarischer_videos_yt_untersuchungsausschuss |
| 279 | komfortabel - kistenschleppen - lästiges - sonderpreis - erhältlich | 79 | 279_komfortabel_kistenschleppen_lästiges_sonderpreis |
| 280 | kvachkov - corona - oberst - 22 - plausible | 68 | 280_kvachkov_corona_oberst_22 |
| 281 | krankenhaus - pflegekräfte - ärzte - patienten - krankenhäuser | 115 | 281_krankenhaus_pflegekräfte_ärzte_patienten |
| 282 | infrarot - konservierungsmethode - geschmackserlebnisse - sonderpreis - dörrautomat | 74 | 282_infrarot_konservierungsmethode_geschmackserlebnisse_sonderpreis |
| 283 | impfpflicht - ärzte - kliniken - exodus - impfgegner | 322 | 283_impfpflicht_ärzte_kliniken_exodus |
| 284 | stuttgartgrundgesetzdemos - stuttgart - freiepressesauerland - überalldeutschlandweit - aufstehn | 147 | 284_stuttgartgrundgesetzdemos_stuttgart_freiepressesauerland_überalldeutschlandweit |
| 285 | trinkwasserqualität - rt - maximale - wasserkisten - testzwang | 54 | 285_trinkwasserqualität_rt_maximale_wasserkisten |
| 286 | sönnichsen - freispruch - prozess - andreas - amtsanmaßung | 156 | 286_sönnichsen_freispruch_prozess_andreas |
| 287 | hervorzuheben - geräuschlose - innenräumen - profi - verwendung | 40 | 287_hervorzuheben_geräuschlose_innenräumen_profi |
| 288 | flugobjekt - abgeschossen - alaska - flugobjekte - objekt | 114 | 288_flugobjekt_abgeschossen_alaska_flugobjekte |
| 289 | assange - julian - wikileaks - julianassange - auslieferung | 45 | 289_assange_julian_wikileaks_julianassange |
| 290 | ecoflow - tragbare - patentierter - herkömmliche - powerstations | 56 | 290_ecoflow_tragbare_patentierter_herkömmliche |
| 291 | freiheit - arendt - hannah - eigenverantwortlichkeit - wiedergegeben | 75 | 291_freiheit_arendt_hannah_eigenverantwortlichkeit |
| 292 | steuern - quellensteuer - staat - mehrwertsteuer - konsumsteuer | 130 | 292_steuern_quellensteuer_staat_mehrwertsteuer |
| 293 | australien - politiker - bernie - sturgeon - cochrane | 66 | 293_australien_politiker_bernie_sturgeon |
| 294 | ärztekammer - szekeres - brief - schulärztin - offenen | 65 | 294_ärztekammer_szekeres_brief_schulärztin |
| 295 | würdest - möchtest - freiwillige - helfen - spende | 39 | 295_würdest_möchtest_freiwillige_helfen |
| 296 | angesagteste - schickeria - paypal - de97100110012620193011 - betteln | 42 | 296_angesagteste_schickeria_paypal_de97100110012620193011 |
| 297 | kohn - stephan - gericht - wirtschaftsbereiche - justiz | 86 | 297_kohn_stephan_gericht_wirtschaftsbereiche |
| 298 | visa - mastercard - rubel - karten - russische | 135 | 298_visa_mastercard_rubel_karten |
| 299 | bitcoin - moral - unmoral - henry - seil | 75 | 299_bitcoin_moral_unmoral_henry |
| 300 | neuestes - zerstörten - boden - mäckle - wasserstandsmeldung | 81 | 300_neuestes_zerstörten_boden_mäckle |
| 301 | msm - familie - homepage - kugelschreiber - geeignet | 69 | 301_msm_familie_homepage_kugelschreiber |
| 302 | kliniken - krankenhäuser - corona - freihaltepauschalen - überlastung | 154 | 302_kliniken_krankenhäuser_corona_freihaltepauschalen |
| 303 | vorzubauen - rechtzeitig - zensurfreien - tragen - auf1 | 52 | 303_vorzubauen_rechtzeitig_zensurfreien_tragen |
| 304 | 0550 - sparkassede88 - 6010 - twitterusa - 1501 | 39 | 304_0550_sparkassede88_6010_twitterusa |
| 305 | polizisten - rechtsgrundsätzen - polizei - erfüllungsgehilfen - verbieten | 97 | 305_polizisten_rechtsgrundsätzen_polizei_erfüllungsgehilfen |
| 306 | geistesblitze - eidenberger - emotionen - leben - tangsworld | 207 | 306_geistesblitze_eidenberger_emotionen_leben |
| 307 | profilnamen - emoji - schliess - platziert - eintragen | 39 | 307_profilnamen_emoji_schliess_platziert |
| 308 | budapest - ungarn - bettinalube - telegramzur - ignazbearth | 48 | 308_budapest_ungarn_bettinalube_telegramzur |
| 309 | preise - teurer - butter - produkte - prozent | 60 | 309_preise_teurer_butter_produkte |
| 310 | nachtragshaushalt - schulden - milliarden - lindner - loan | 102 | 310_nachtragshaushalt_schulden_milliarden_lindner |
| 311 | lion - media - de32100110012624879184 - kontoverbindung - inhaber | 60 | 311_lion_media_de32100110012624879184_kontoverbindung |
| 312 | zeitung - berliner - lobbyarbeit - artikel - rechtsbruch | 147 | 312_zeitung_berliner_lobbyarbeit_artikel |
| 313 | protest - patriotische - symbol - freiheit - banner | 129 | 313_protest_patriotische_symbol_freiheit |
| 314 | wärme - heizung - sorgt - wohlige - stromausfalls | 38 | 314_wärme_heizung_sorgt_wohlige |
| 315 | klinik - autokraten - gestörten - patienten - geistig | 53 | 315_klinik_autokraten_gestörten_patienten |
| 316 | verbinde - punkte - sunny - neugier - jugendlicher | 44 | 316_verbinde_punkte_sunny_neugier |
| 317 | samstag - 12 - freiheit - heldenplatz - your | 129 | 317_samstag_12_freiheit_heldenplatz |
| 318 | anne - spiegel - bundesfamilienministerin - rheinland - flutkatastrophe | 38 | 318_anne_spiegel_bundesfamilienministerin_rheinland |
| 319 | powerbank - silikonkappe - charging - netzunabhängige - spritzwassergeschützte | 38 | 319_powerbank_silikonkappe_charging_netzunabhängige |
| 320 | ukrainische - nazis - ukrainischen - 2014 - ukraine | 208 | 320_ukrainische_nazis_ukrainischen_2014 |
| 321 | aktiendepot - kryptos - moral - etoro - consorsbank | 72 | 321_aktiendepot_kryptos_moral_etoro |
| 322 | kolloidales - silber - bakterien - pilze - meistverkaufte | 92 | 322_kolloidales_silber_bakterien_pilze |
| 323 | katastrophe - ahrtal - flutkatastrophe - rheinland - flut | 53 | 323_katastrophe_ahrtal_flutkatastrophe_rheinland |
| 324 | freundinnen - braun - roman - jaco - courses | 63 | 324_freundinnen_braun_roman_jaco |
| 325 | cbdc - id - sucharit - bhakdi - digital | 38 | 325_cbdc_id_sucharit_bhakdi |
| 326 | aktivisten - rechtsextremismus - demonstrationen - zensurdurchbruch - vorbeizuschauen | 101 | 326_aktivisten_rechtsextremismus_demonstrationen_zensurdurchbruch |
| 327 | müller - mittelerde - joachim - sammelband - feindbild | 77 | 327_müller_mittelerde_joachim_sammelband |
| 328 | bye - kalcker - biophysiker - behandlung - handhabende | 64 | 328_bye_kalcker_biophysiker_behandlung |
| 329 | mig - kampfjets - polen - 29 - ramstein | 85 | 329_mig_kampfjets_polen_29 |
| 330 | danke - gegenuni - weihnachtsaktion - dankeschön - weihnachtsgeschenk | 70 | 330_danke_gegenuni_weihnachtsaktion_dankeschön |
| 331 | kennedy - fauci - anthony - jr - lawrie | 45 | 331_kennedy_fauci_anthony_jr |
| 332 | 850 - fc - funkgerät - allwetter - wasserdichtes | 47 | 332_850_fc_funkgerät_allwetter |
| 333 | ohio - chemikalien - entgleisung - vinylchlorid - giftigen | 50 | 333_ohio_chemikalien_entgleisung_vinylchlorid |
| 334 | nährstoffe - bioverfügbarkeit - absorptionsrate - phospholipid - doppelschicht | 37 | 334_nährstoffe_bioverfügbarkeit_absorptionsrate_phospholipid |
| 335 | gott - jesus - gottes - huxley - jesu | 55 | 335_gott_jesus_gottes_huxley |
| 336 | bakterien - pilzeauch - viren - mundspülungen - gurgeln | 37 | 336_bakterien_pilzeauch_viren_mundspülungen |
| 337 | brücken - autobahn - fahrverbote - tempo - autobahnen | 47 | 337_brücken_autobahn_fahrverbote_tempo |
| 338 | videokanal - rebell - nachrichtenkanal - aufklärungsvideos - hilfreiche | 51 | 338_videokanal_rebell_nachrichtenkanal_aufklärungsvideos |
| 339 | edeka - regale - produkte - händler - regalen | 59 | 339_edeka_regale_produkte_händler |
| 340 | infrastruktur - feuerwehr - kritische - omikron - grundversorgung | 73 | 340_infrastruktur_feuerwehr_kritische_omikron |
| 341 | lichte - erkenntnisquelle - reinkarnation - dreibändige - sinnzusammenhänge | 55 | 341_lichte_erkenntnisquelle_reinkarnation_dreibändige |
| 342 | erde - giuliana - asteroid - conforto - komet | 54 | 342_erde_giuliana_asteroid_conforto |
| 343 | blackout - rwe - telekom - notrationnimmt - jahredieses | 141 | 343_blackout_rwe_telekom_notrationnimmt |
| 344 | impfpflicht - impfung - allgemeine - impfzwanges - belogen | 132 | 344_impfpflicht_impfung_allgemeine_impfzwanges |
| 345 | gutenmorgen - wochenende - exxtrafrüh - wünscht - tompos | 83 | 345_gutenmorgen_wochenende_exxtrafrüh_wünscht |
| 346 | gates - bill - melinda - epstein - jeffrey | 38 | 346_gates_bill_melinda_epstein |
| 347 | hochschulen - 2g - verwaltungsgerichtshof - vgh - mannheim | 39 | 347_hochschulen_2g_verwaltungsgerichtshof_vgh |
| 348 | euro - schreyer - grundsteuer - bargeld - commentary | 208 | 348_euro_schreyer_grundsteuer_bargeld |
| 349 | biden - joe - xi - us - usa | 242 | 349_biden_joe_xi_us |
| 350 | lauterbach - linien - rote - kartenhaus - stürzt | 74 | 350_lauterbach_linien_rote_kartenhaus |
| 351 | strohmeier - stauraum - natascha - zeltvordach - schlafbereich | 57 | 351_strohmeier_stauraum_natascha_zeltvordach |
| 352 | faeser - hessen - nancy - bundesinnenministerin - kandidatur | 78 | 352_faeser_hessen_nancy_bundesinnenministerin |
| 353 | sahin - ugur - krankenkassen - arbeitsplatz - arbeiten | 142 | 353_sahin_ugur_krankenkassen_arbeitsplatz |
| 354 | strohmeier - 6713 - aspkat2lxxx - 0058 - 0321 | 54 | 354_strohmeier_6713_aspkat2lxxx_0058 |
| 355 | mückstein - schuldiges - putzt - österreich - regierung | 70 | 355_mückstein_schuldiges_putzt_österreich |
| 356 | brauche - liebe - abendgebet - kennen - lieben | 126 | 356_brauche_liebe_abendgebet_kennen |
| 357 | fed - notenbank - ezb - leitzins - zinserhöhung | 41 | 357_fed_notenbank_ezb_leitzins |
| 358 | schöning - weltordnung - abendlandes - heiko - oswald | 192 | 358_schöning_weltordnung_abendlandes_heiko |
| 359 | laterne - proxy - kompaktes - hifi - mikrowelle | 65 | 359_laterne_proxy_kompaktes_hifi |
| 360 | hamburg - münchen - strasse - madrid - düsseldorf | 118 | 360_hamburg_münchen_strasse_madrid |
| 361 | bolsonaro - cyberangriff - websites - israelische - brasilianischen | 59 | 361_bolsonaro_cyberangriff_websites_israelische |
| 362 | faschismus - ensslin - raf - weltgeschichte - gudrun | 184 | 362_faschismus_ensslin_raf_weltgeschichte |
| 363 | helping - brighteon - labeling - avoidance - contamination | 48 | 363_helping_brighteon_labeling_avoidance |
| 364 | locals - duran - community - reviews - attacks | 95 | 364_locals_duran_community_reviews |
| 365 | kallistalk - danke - dank - voraus - vielen | 84 | 365_kallistalk_danke_dank_voraus |
| 366 | catherine - marc - vimeo - frank - gehackt | 67 | 366_catherine_marc_vimeo_frank |
| 367 | umdrehungen - handgenerators - einsatzfähig - schlechtem - genügen | 35 | 367_umdrehungen_handgenerators_einsatzfähig_schlechtem |
| 368 | rt - rtl - language - face - talpa | 67 | 368_rt_rtl_language_face |
| 369 | ecoflow - tragbare - elektrowerkzeuge - patentierter - haushaltsgeräte | 35 | 369_ecoflow_tragbare_elektrowerkzeuge_patentierter |
| 370 | hauptbahnhof - graz - kundgebung - 13 - nähere | 75 | 370_hauptbahnhof_graz_kundgebung_13 |
| 371 | haushalt - einfachste - person - monat - 0xd449694348b1d618eca2829bbc901782f5172689 | 35 | 371_haushalt_einfachste_person_monat |
| 372 | kommunismus - sozialismus - kommunistische - sexualisierung - ideologie | 60 | 372_kommunismus_sozialismus_kommunistische_sexualisierung |
| 373 | agüero - herzstillstand - fußball - lindelof - sergio | 64 | 373_agüero_herzstillstand_fußball_lindelof |
| 374 | song - lied - baby - songs - single | 90 | 374_song_lied_baby_songs |
| 375 | galgant - preta - terra - posch - maca | 110 | 375_galgant_preta_terra_posch |
| 376 | akku - weltempfang - weltempfänger - unersetzlichen - universalradio | 41 | 376_akku_weltempfang_weltempfänger_unersetzlichen |
| 377 | apartheid - südafrika - schäbiger - unmenschlichem - erbarmungsloser | 51 | 377_apartheid_südafrika_schäbiger_unmenschlichem |
| 378 | inflation - ezb - geldpolitik - notenbanken - zinsen | 73 | 378_inflation_ezb_geldpolitik_notenbanken |
| 379 | geld - inflation - schritt - enteignen - exklusivem | 167 | 379_geld_inflation_schritt_enteignen |
| 380 | ausgängen - usb - aufladung - netzsteckdosen - kfz | 35 | 380_ausgängen_usb_aufladung_netzsteckdosen |
| 381 | wiederzuentdecken - unzulänglichkeiten - gelegenheiten - verborgene - fortschritt | 72 | 381_wiederzuentdecken_unzulänglichkeiten_gelegenheiten_verborgene |
| 382 | niedersachsen - aktionen - freieniedersachsen - übersicht - info | 54 | 382_niedersachsen_aktionen_freieniedersachsen_übersicht |
| 383 | neutralität - österreich - bellen - nato - van | 136 | 383_neutralität_österreich_bellen_nato |
| 384 | auf1 - wenko - vivien - vogt - janotka | 78 | 384_auf1_wenko_vivien_vogt |
| 385 | spannring - getreidetonnen - plombierbarem - gewickeltem - füllguts | 53 | 385_spannring_getreidetonnen_plombierbarem_gewickeltem |
| 386 | willst - kostenfrei - freiheit - sklaven - sklaverei | 72 | 386_willst_kostenfrei_freiheit_sklaven |
| 387 | blair - arabien - saudi - yuan - donald | 45 | 387_blair_arabien_saudi_yuan |
| 388 | virus - wcr - viren - mikroben - krankmachendes | 109 | 388_virus_wcr_viren_mikroben |
| 389 | hetzern - verharmloser - stopfen - lügnern - geschichtsausgabe | 125 | 389_hetzern_verharmloser_stopfen_lügnern |
| 390 | ärzte - impfnebenwirkungen - apothekenmitarbeiterin - mediziner - bevorstehende | 181 | 390_ärzte_impfnebenwirkungen_apothekenmitarbeiterin_mediziner |
| 391 | odysee - demotermine - gettr - youtube - spaziergänge | 44 | 391_odysee_demotermine_gettr_youtube |
| 392 | rabbit - research - folge - verlierst - enger | 40 | 392_rabbit_research_folge_verlierst |
| 393 | euro - monat - verdienen - impfzentren - impfungen | 55 | 393_euro_monat_verdienen_impfzentren |
| 394 | powerstation - stromvorrat - abrufen - jeglichen - speichern | 34 | 394_powerstation_stromvorrat_abrufen_jeglichen |
| 395 | nehammer - kärnten - wahlauftakt - övp - kanzler | 93 | 395_nehammer_kärnten_wahlauftakt_övp |
| 396 | humanus - codex - fluchtrucksacklars - gestattete - urlaubs | 52 | 396_humanus_codex_fluchtrucksacklars_gestattete |
| 397 | animal - spirit - tiere - vagina - vulva | 67 | 397_animal_spirit_tiere_vagina |
| 398 | google - here - selber - landgericht - foods | 46 | 398_google_here_selber_landgericht |
| 399 | trinkwasserqualität - leitungswasser - maximale - q10 - vitales | 50 | 399_trinkwasserqualität_leitungswasser_maximale_q10 |
| 400 | kommission - leyen - york - times - sms | 41 | 400_kommission_leyen_york_times |
| 401 | gasheizung - teekessel - lüftung - bauwagen - wetterfesten | 49 | 401_gasheizung_teekessel_lüftung_bauwagen |
| 402 | belgrad - serbien - selenskij - stampa - serbiens | 77 | 402_belgrad_serbien_selenskij_stampa |
| 403 | filterkaraffe - pelargoni - 850 - funkgerät - fc | 61 | 403_filterkaraffe_pelargoni_850_funkgerät |
| 404 | bubble - leiberl - komm - zeug - kumm | 46 | 404_bubble_leiberl_komm_zeug |
| 405 | studentenstehenauf - tiktok - schenkung - schüler - werte | 64 | 405_studentenstehenauf_tiktok_schenkung_schüler |
| 406 | kostenlawine - klimaschutz - energiearmut - erfindungen - klima | 184 | 406_kostenlawine_klimaschutz_energiearmut_erfindungen |
| 407 | medizinrecht - fachanwältin - bahner - beate - buches | 44 | 407_medizinrecht_fachanwältin_bahner_beate |
| 408 | geburtshilfe - krankenhaus - nrw - gleisdorf - 2021 | 188 | 408_geburtshilfe_krankenhaus_nrw_gleisdorf |
| 409 | guten - morgen - schönen - denkt - kissingen | 77 | 409_guten_morgen_schönen_denkt |
| 410 | buy - order - soldier - pillows - rested | 48 | 410_buy_order_soldier_pillows |
| 411 | d3 - vitamin - taktische - nervensystem - demenzentwicklung | 68 | 411_d3_vitamin_taktische_nervensystem |
| 412 | mittelerde - säulen - tv - mittelerdetv - lädchen | 71 | 412_mittelerde_säulen_tv_mittelerdetv |
| 413 | selbstschärfender - korund - mahlsteinen - getreides - feines | 33 | 413_selbstschärfender_korund_mahlsteinen_getreides |
| 414 | faktenchecker - faktenchecks - fakten - finanziert - behauptung | 90 | 414_faktenchecker_faktenchecks_fakten_finanziert |
| 415 | humor - transzendenz - meyer - entdecken - potenzial | 74 | 415_humor_transzendenz_meyer_entdecken |
| 416 | covid - 19 - impfstoffe - impfung - impfpflichtgesetz | 209 | 416_covid_19_impfstoffe_impfung |
| 417 | vital - kopp - ernährung - borax - europaweit | 65 | 417_vital_kopp_ernährung_borax |
| 418 | weihnachtsgeschäft - einzelhandel - 2g - innenstädte - hde | 45 | 418_weihnachtsgeschäft_einzelhandel_2g_innenstädte |
| 419 | bp - lebensmittelbevorratung - müsliriegel - seenotration - notverpflegung | 33 | 419_bp_lebensmittelbevorratung_müsliriegel_seenotration |
| 420 | kanada - trudeau - trucker - kanadischen - canada | 44 | 420_kanada_trudeau_trucker_kanadischen |
| 421 | demos - hälfte - existenzphilosophie - grundanliegen - jaspers | 72 | 421_demos_hälfte_existenzphilosophie_grundanliegen |
| 422 | bp - nährwerte - süß - norwegischen - schmeckt | 33 | 422_bp_nährwerte_süß_norwegischen |
| 423 | wiedergewinnung - mündigkeit - viralität - impfens - seminare | 33 | 423_wiedergewinnung_mündigkeit_viralität_impfens |
| 424 | abtreibungen - smollett - gifte - sauerkraut - abtreibung | 70 | 424_abtreibungen_smollett_gifte_sauerkraut |
| 425 | impfpflicht - geiselhaft - umfrage - beugehaft - impfprogramm | 107 | 425_impfpflicht_geiselhaft_umfrage_beugehaft |
| 426 | frankreich - impfpass - regierungsferne - cabrera - madrid | 44 | 426_frankreich_impfpass_regierungsferne_cabrera |
| 427 | edelmetalle - dir - unverbindlichen - sponsor - gold | 54 | 427_edelmetalle_dir_unverbindlichen_sponsor |
| 428 | rohkakao - nudel - gaumenfreuden - aufstriche - apfelmus | 59 | 428_rohkakao_nudel_gaumenfreuden_aufstriche |
| 429 | twitter - freecr - musk - internetanbietern - accs | 60 | 429_twitter_freecr_musk_internetanbietern |
| 430 | raketenofen - brennbaren - outdoorküche - raketenöfen - multitalent | 32 | 430_raketenofen_brennbaren_outdoorküche_raketenöfen |
| 431 | schröder - soros - guardiola - pep - versager | 143 | 431_schröder_soros_guardiola_pep |
| 432 | teiegram - rüber - aufgerollt - verspreche - gleichgeschaltet | 49 | 432_teiegram_rüber_aufgerollt_verspreche |
| 433 | selbstreinigend - alleskönner - absoluter - wasserfilter - extrem | 32 | 433_selbstreinigend_alleskönner_absoluter_wasserfilter |
| 434 | live - gunnar - lbry - youtube - streamen | 87 | 434_live_gunnar_lbry_youtube |
| 435 | dresden - berlin - münchen - gedenken - nürnberg | 151 | 435_dresden_berlin_münchen_gedenken |
| 436 | kongress - interviewgästen - servustv - rednern - hochkarätigen | 96 | 436_kongress_interviewgästen_servustv_rednern |
| 437 | mgk1q17 - webseite - ovalmedia - mutigmacher - movipo | 69 | 437_mgk1q17_webseite_ovalmedia_mutigmacher |
| 438 | rvm - coffee - discount - get - apparel | 96 | 438_rvm_coffee_discount_get |
| 439 | lichterspaziergang - autokorso - graz - bruck - gleisdorf | 116 | 439_lichterspaziergang_autokorso_graz_bruck |
| 440 | rpp - innere - präsentiert - bruin - raphael | 62 | 440_rpp_innere_präsentiert_bruin |
| 441 | sahara - staub - schwermetalle - laboranalyse - magnetisch | 47 | 441_sahara_staub_schwermetalle_laboranalyse |
| 442 | migranten - migration - mcgregor - sanft - migrationswaffe | 113 | 442_migranten_migration_mcgregor_sanft |
| 443 | vorratstonne - notieren - einlagerungsdatum - eingelagerte - erntejahr | 32 | 443_vorratstonne_notieren_einlagerungsdatum_eingelagerte |
| 444 | greetings - patriots - personal - go - my | 31 | 444_greetings_patriots_personal_go |
| 445 | taylor - musik - kaiser - gunnar - album | 49 | 445_taylor_musik_kaiser_gunnar |
| 446 | saharastaub - schwefeldioxid - staub - sahara - sand | 31 | 446_saharastaub_schwefeldioxid_staub_sahara |
| 447 | de22830654080004273567 - rechtsanwalt - spendenkonto - web - christ | 54 | 447_de22830654080004273567_rechtsanwalt_spendenkonto_web |
| 448 | sonnenblumenöl - speiseöl - aldi - kühe - flaschen | 85 | 448_sonnenblumenöl_speiseöl_aldi_kühe |
| 449 | tragegriffe - anheben - grauem - pulverbeschichtetem - versetzen | 35 | 449_tragegriffe_anheben_grauem_pulverbeschichtetem |
| 450 | wahrheit - lüge - lügen - oten - brunnen | 116 | 450_wahrheit_lüge_lügen_oten |
| 451 | versteckte - ressourcen - nutzen - erwartungen - bur | 32 | 451_versteckte_ressourcen_nutzen_erwartungen |
| 452 | geigerzähler - radioaktiver - counter - strahlung - cm | 64 | 452_geigerzähler_radioaktiver_counter_strahlung |
| 453 | jobplattform - jobsuche - füreinefreieimpfentscheidung - verlag - verstärkung | 47 | 453_jobplattform_jobsuche_füreinefreieimpfentscheidung_verlag |
| 454 | rodriguez - his - audioanalysen - ___________ - audioanalyse | 44 | 454_rodriguez_his_audioanalysen____________ |
| 455 | greetings - patriots - personal - go - my | 43 | 455_greetings_patriots_personal_go |
| 456 | funkgeräte - limitierung - verschlüsselung - abhörsicher - ausstatten | 32 | 456_funkgeräte_limitierung_verschlüsselung_abhörsicher |
| 457 | feuerstahl - gusseisentopf - braten - temperatur - passenden | 115 | 457_feuerstahl_gusseisentopf_braten_temperatur |
| 458 | swift - sanktionen - banken - russland - westlichen | 117 | 458_swift_sanktionen_banken_russland |
| 459 | rundbriefabo - zugriffe - farbige - info - optisch | 79 | 459_rundbriefabo_zugriffe_farbige_info |
| 460 | schweden - hochkorrupten - volksvermögen - krankensystem - gesundheitsvorsorge | 77 | 460_schweden_hochkorrupten_volksvermögen_krankensystem |
| 461 | dämpfung - fersenbereich - stahlkappe - außensohle - schaftabschluss | 31 | 461_dämpfung_fersenbereich_stahlkappe_außensohle |
| 462 | unbemannte - hubschrauber - luftfahrzeuge - aufklebern - su | 46 | 462_unbemannte_hubschrauber_luftfahrzeuge_aufklebern |
| 463 | dr - schäfer - rita - falko - lehmann | 30 | 463_dr_schäfer_rita_falko |
| 464 | euro - diäten - gehalt - simson - monat | 35 | 464_euro_diäten_gehalt_simson |
| 465 | amazon - inbreeding - coefficient - webshop - dynasties | 51 | 465_amazon_inbreeding_coefficient_webshop |
| 466 | rfid - geldbörsen - kredit - damen - esquire | 48 | 466_rfid_geldbörsen_kredit_damen |
| 467 | grundrechtsaktivist - di - trieste - via - con | 129 | 467_grundrechtsaktivist_di_trieste_via |
| 468 | selleriesaft - bio - gourmet - sellerie - begibt | 44 | 468_selleriesaft_bio_gourmet_sellerie |
| 469 | übersterblichkeit - mutmaßt - zurückblickt - wahnsinniger - wisnewskis | 93 | 469_übersterblichkeit_mutmaßt_zurückblickt_wahnsinniger |
| 470 | ryanair - faa - leary - jet2 - boeing | 48 | 470_ryanair_faa_leary_jet2 |
| 471 | lauterbach - bundesgesundheitsminister - karl - impfstoff - 1995 | 57 | 471_lauterbach_bundesgesundheitsminister_karl_impfstoff |
| 472 | bewusst - spielzeug - illusion - zufriedenheit - fackel | 88 | 472_bewusst_spielzeug_illusion_zufriedenheit |
| 473 | sears - jp - komiker - ausnahmslos - evtl | 65 | 473_sears_jp_komiker_ausnahmslos |
| 474 | winnetou - lausen - tom - massengeschmacks - verriss | 138 | 474_winnetou_lausen_tom_massengeschmacks |
| 475 | klarnamenpflicht - polizeigewerkschaft - fehlinformationen - tech - wendt | 100 | 475_klarnamenpflicht_polizeigewerkschaft_fehlinformationen_tech |
| 476 | selenskyj - schröder - wolodymyr - bonnell - korruption | 143 | 476_selenskyj_schröder_wolodymyr_bonnell |
| 477 | drohne - zagreb - kroatiens - luftraum - drohnen | 66 | 477_drohne_zagreb_kroatiens_luftraum |
| 478 | blutprobe - sinuswellen - wechselrichter - blutpass - normalität | 41 | 478_blutprobe_sinuswellen_wechselrichter_blutpass |
| 479 | jemen - einzelstream - josilo - dlive - roslesinforg | 59 | 479_jemen_einzelstream_josilo_dlive |
| 480 | löhnitz - steffen - wiener - 2g - hacker | 88 | 480_löhnitz_steffen_wiener_2g |
| 481 | mcdonald - adidas - ikea - wodka - filialen | 88 | 481_mcdonald_adidas_ikea_wodka |
| 482 | österreich - impfpflicht - arbeitskleidung - strafen - soll | 123 | 482_österreich_impfpflicht_arbeitskleidung_strafen |
| 483 | brille - kontaktlinsen - nerviger - nervigeres - brillenträger | 49 | 483_brille_kontaktlinsen_nerviger_nervigeres |
| 484 | china - cips - chinas - evergrande - apple | 83 | 484_china_cips_chinas_evergrande |
| 485 | matrix - film - doku - wachrütteln - sprengkraft | 45 | 485_matrix_film_doku_wachrütteln |
| 486 | gasspeicher - füllstand - gefüllt - prozent - bleschke | 36 | 486_gasspeicher_füllstand_gefüllt_prozent |
| 487 | marburg - butter - butterfass - kilner - äquatorialguinea | 50 | 487_marburg_butter_butterfass_kilner |
| 488 | adventkalender - kerzen - grüße - eisbaden - verdauungsspaziergang | 103 | 488_adventkalender_kerzen_grüße_eisbaden |
| 489 | gräftner - spiritualität - nelles - psychologie - barbara | 71 | 489_gräftner_spiritualität_nelles_psychologie |
| 490 | handy - ortung - spionage - standorts - lokalisierung | 71 | 490_handy_ortung_spionage_standorts |
| 491 | staatsrechtler - versammlungsfreiheit - versammlung - verfassungswidrig - teilnehmerzahl | 54 | 491_staatsrechtler_versammlungsfreiheit_versammlung_verfassungswidrig |
| 492 | illusionen - marineinspekteur - kaack - ausfahrt - wehrpflicht | 92 | 492_illusionen_marineinspekteur_kaack_ausfahrt |
| 493 | söder - markus - kubicki - ministerpräsident - bayerns | 35 | 493_söder_markus_kubicki_ministerpräsident |
| 494 | lenkrollen - größtmögliche - mobilität - stabile - lieferbar | 33 | 494_lenkrollen_größtmögliche_mobilität_stabile |
| 495 | geo - seminare - viralität - scenes - linksrechtsmitte | 52 | 495_geo_seminare_viralität_scenes |
| 496 | nattokinase - natto - hergestellt - heilnatura - zusatzstofffrei | 29 | 496_nattokinase_natto_hergestellt_heilnatura |
| 497 | akku - polymer - wanderns - schlaufe - aufzuhängen | 29 | 497_akku_polymer_wanderns_schlaufe |
| 498 | hierfür - petroleumheizung - nordkorea - folgende - vorteile | 29 | 498_hierfür_petroleumheizung_nordkorea_folgende |
| 499 | tschentscher - hamburg - impfstatus - bürgermeister - ungeimpften | 40 | 499_tschentscher_hamburg_impfstatus_bürgermeister |
| 500 | europaweit - klicken - versandkostenfrei - bestellen - link | 34 | 500_europaweit_klicken_versandkostenfrei_bestellen |
| 501 | migranten - informationsstelle - einwanderer - österreich - einwanderungsland | 57 | 501_migranten_informationsstelle_einwanderer_österreich |
| 502 | bgl - logistik - engelhardt - spediteure - güterkraftverkehr | 51 | 502_bgl_logistik_engelhardt_spediteure |
| 503 | youtbe - rabbit - odyssee - research - substack | 39 | 503_youtbe_rabbit_odyssee_research |
| 504 | eu - verschlüsselte - google - kommission - datenüberwachung | 37 | 504_eu_verschlüsselte_google_kommission |
| 505 | orf - beschwerde - bmi - aya - velázquez | 75 | 505_orf_beschwerde_bmi_aya |
| 506 | warentest - ffp2 - stiftung - masken - atemwiderstand | 41 | 506_warentest_ffp2_stiftung_masken |
| 507 | piks - nieswandt - fluchtrucksack - laterne - hill | 116 | 507_piks_nieswandt_fluchtrucksack_laterne |
| 508 | mindestlaufzeit - mtl - kaufverpflichtung - kündigungsfristen - platin | 49 | 508_mindestlaufzeit_mtl_kaufverpflichtung_kündigungsfristen |
| 509 | gott - uz - wonders - bibel - golubice | 81 | 509_gott_uz_wonders_bibel |
| 510 | asta - journalist - polizei - gez - studentenausschuss | 65 | 510_asta_journalist_polizei_gez |
| 511 | fabianer - kickl - herbert - nehammer - partei | 166 | 511_fabianer_kickl_herbert_nehammer |
| 512 | katastrophen - überleben - handbuch - zirbenkissen - signalpfeife | 83 | 512_katastrophen_überleben_handbuch_zirbenkissen |
| 513 | jaspers - schweden - dänemark - brandenburg - übertechnisierung | 44 | 513_jaspers_schweden_dänemark_brandenburg |
| 514 | feuerwehr - flutkatastrophe - neuseeland - notstand - gabrielle | 89 | 514_feuerwehr_flutkatastrophe_neuseeland_notstand |
| 515 | werbematerial - 0028 - zusendungen - 1037 - s52 | 53 | 515_werbematerial_0028_zusendungen_1037 |
| 516 | kontaktbeschränkungen - geimpfte - genesene - zusammenkünfte - tschentscher | 40 | 516_kontaktbeschränkungen_geimpfte_genesene_zusammenkünfte |
| 517 | heizung - extra - petroleum - flammlöschautomatik - petroleumbetriebenen | 29 | 517_heizung_extra_petroleum_flammlöschautomatik |
| 518 | warnstufe - pisten - alpen - lawinen - bergsportler | 63 | 518_warnstufe_pisten_alpen_lawinen |
| 519 | elektroschocker - lady - power - energetischen - sturmlaterne | 40 | 519_elektroschocker_lady_power_energetischen |
| 520 | 4200 - keramik - aussenbereiche - belüftete - wettergeschützte | 38 | 520_4200_keramik_aussenbereiche_belüftete |
| 521 | 4970 - at82 - 1843 - 4500 - gibaatwwxxx | 46 | 521_4970_at82_1843_4500 |
| 522 | funkgeräte - limitierung - verschlüsselung - abhörsicher - ausstatten | 35 | 522_funkgeräte_limitierung_verschlüsselung_abhörsicher |
| 523 | widerstand - zuviele - freiheit - demo - besinnt | 146 | 523_widerstand_zuviele_freiheit_demo |
| 524 | hervorzuheben - geräuschlose - jeglichen - innenräumen - profi | 28 | 524_hervorzuheben_geräuschlose_jeglichen_innenräumen |
| 525 | ausländer - jemanden - savior - propagandisten - zentralisierte | 198 | 525_ausländer_jemanden_savior_propagandisten |
| 526 | uttley - smokie - terry - abend - wunderschönen | 136 | 526_uttley_smokie_terry_abend |
| 527 | schwab - klaus - harvard - cia - wef | 39 | 527_schwab_klaus_harvard_cia |
| 528 | stew - content - advertise - episodes - shedding | 28 | 528_stew_content_advertise_episodes |
| 529 | impfpflicht - fdp - kubicki - schiessler - mückstein | 150 | 529_impfpflicht_fdp_kubicki_schiessler |
| 530 | immunsystem - funkgerät - stärken - gesundheit - abhörsicher | 46 | 530_immunsystem_funkgerät_stärken_gesundheit |
| 531 | waldhäusl - wien - gottfried - asyl - landesrat | 55 | 531_waldhäusl_wien_gottfried_asyl |
| 532 | maitrunk - 4502 - fidor - spendenmöglichkeit - de95 | 35 | 532_maitrunk_4502_fidor_spendenmöglichkeit |
| 533 | straßen - straße - raus - kritischemasse - freiberg | 67 | 533_straßen_straße_raus_kritischemasse |
| 534 | nato - weltkrieg - krieg - ukraine - dritten | 293 | 534_nato_weltkrieg_krieg_ukraine |
| 535 | peters - by - exposed - cannot - truth | 106 | 535_peters_by_exposed_cannot |
| 536 | innenfach - abnehmbare - hüfttasche - gepolsterter - umhängetasche | 28 | 536_innenfach_abnehmbare_hüfttasche_gepolsterter |
| 537 | captain - hanni - bründel - future - lanz | 57 | 537_captain_hanni_bründel_future |
| 538 | baic - lieferketten - kohlekrise - exportüberschuss - prozent | 92 | 538_baic_lieferketten_kohlekrise_exportüberschuss |
| 539 | trocknen100 - nüssen - rezepte - dörren - obst | 37 | 539_trocknen100_nüssen_rezepte_dörren |
| 540 | usa - biden - massenvernichtungswaffen - plan - westen | 135 | 540_usa_biden_massenvernichtungswaffen_plan |
| 541 | weishaupt - schlagstöcke - pfefferspray - pürstl - straße | 208 | 541_weishaupt_schlagstöcke_pfefferspray_pürstl |
| 542 | sanktionen - westen - unvermeidlich - russland - kanals | 175 | 542_sanktionen_westen_unvermeidlich_russland |
| 543 | wehrpflicht - bundeswehr - soldaten - amtshilfe - högl | 132 | 543_wehrpflicht_bundeswehr_soldaten_amtshilfe |
| 544 | myokarditis - moderna - impfung - studie - mrna | 90 | 544_myokarditis_moderna_impfung_studie |
| 545 | 00 - rathaus - 18 - weitaus - marktplatz | 73 | 545_00_rathaus_18_weitaus |
| 546 | rentner - rente - 2030 - cum - schlegel | 71 | 546_rentner_rente_2030_cum |
| 547 | erbil - iran - raketen - afghanen - abgefeuert | 61 | 547_erbil_iran_raketen_afghanen |
| 548 | instrumenten - safety - op - mitfahrgelegenheit - liberty | 39 | 548_instrumenten_safety_op_mitfahrgelegenheit |
| 549 | chemikalien - pfas - umwelt - tabletten - esbit | 81 | 549_chemikalien_pfas_umwelt_tabletten |
| 550 | ffp2 - maskenpflicht - innenräumen - aufgehoben - pcr | 68 | 550_ffp2_maskenpflicht_innenräumen_aufgehoben |
| 551 | lauterbachruecktrittsofort - honkhonk - truckersconvoy2022 - defaults - truckersforfreedom2022 | 66 | 551_lauterbachruecktrittsofort_honkhonk_truckersconvoy2022_defaults |
| 552 | blasendurchbruch - rausausderblase - medienallianz - unheilige - blase | 34 | 552_blasendurchbruch_rausausderblase_medienallianz_unheilige |
| 553 | rtl - vermögensregister - tennet - eu - gruner | 102 | 553_rtl_vermögensregister_tennet_eu |
| 554 | sönnichsen - atzorn - andreas - arzt - postvac | 149 | 554_sönnichsen_atzorn_andreas_arzt |
| 555 | my - germany - from - he - greetings | 49 | 555_my_germany_from_he |
| 556 | platzbedarf - geringe - zubereitung - vorteil - lagerung | 27 | 556_platzbedarf_geringe_zubereitung_vorteil |
| 557 | gunnarkaiser - operette - favorit - tenor - philosophie | 58 | 557_gunnarkaiser_operette_favorit_tenor |
| 558 | de22830654080004273567 - mediakanälen - spendenkonto - lügen - compact | 36 | 558_de22830654080004273567_mediakanälen_spendenkonto_lügen |
| 559 | wissenschaftsforscher - scheingast - frauenkollektiv - rednerinnen - weish | 27 | 559_wissenschaftsforscher_scheingast_frauenkollektiv_rednerinnen |
| 560 | bunker - geheimarmeen - afghanistan - nato - cia | 242 | 560_bunker_geheimarmeen_afghanistan_nato |
| 561 | verordnungen - verordnung - lockdown - maßnahmen - lockdowns | 205 | 561_verordnungen_verordnung_lockdown_maßnahmen |
| 562 | ärztekammer - szekeres - ärzte - wohlfahrtsfonds - mitglieder | 105 | 562_ärztekammer_szekeres_ärzte_wohlfahrtsfonds |
| 563 | 3g - 2g - kontrollen - regel - nonfood | 45 | 563_3g_2g_kontrollen_regel |
| 564 | davos - dsds - 1958 - humphrey - chat | 196 | 564_davos_dsds_1958_humphrey |
| 565 | spaziergang - angemeldete - montag - spazieren - montagsspaziergang | 74 | 565_spaziergang_angemeldete_montag_spazieren |
| 566 | pfizer - injektion - jähriges - thailändische - prinzessin | 51 | 566_pfizer_injektion_jähriges_thailändische |
| 567 | our - patriot - most - gourmet - sleepy | 41 | 567_our_patriot_most_gourmet |
| 568 | medien - russische - propaganda - basieren - russland | 195 | 568_medien_russische_propaganda_basieren |
| 569 | dlive - pour - classic - kaffee - stanley | 56 | 569_dlive_pour_classic_kaffee |
| 570 | greetings - patriots - personal - go - my | 27 | 570_greetings_patriots_personal_go |
| 571 | kpa - konfektioniert - pfrontner - geschnitten - allgäuer | 46 | 571_kpa_konfektioniert_pfrontner_geschnitten |
| 572 | paypal - manumittas - mittas - odysseylbry - videokanäle | 33 | 572_paypal_manumittas_mittas_odysseylbry |
| 573 | 36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn - normalen - calciumspiegel - götterwelt - immunsystems | 37 | 573_36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn_normalen_calciumspiegel_götterwelt |
| 574 | kabellos - integriertes - digitalkamera - spritzwassergeschütztes - tablet | 35 | 574_kabellos_integriertes_digitalkamera_spritzwassergeschütztes |
| 575 | kubicki - lausen - vergeltung - rache - jünger | 64 | 575_kubicki_lausen_vergeltung_rache |
| 576 | lagern - 10er - notwasserbeutel - katastrophenfälle - wasserbeutel | 26 | 576_lagern_10er_notwasserbeutel_katastrophenfälle |
| 577 | symptome - krankenhaus - binger - müdigkeit - felicia | 92 | 577_symptome_krankenhaus_binger_müdigkeit |
| 578 | kettle - kelly - sturmkanne - edelstahl - original | 26 | 578_kettle_kelly_sturmkanne_edelstahl |
| 579 | palmer - ausnahmezustand - lauterbach - karl - beugehaft | 72 | 579_palmer_ausnahmezustand_lauterbach_karl |
| 580 | getreidetonne - lebensmitteln - unpraktisch - vorratshaltung - säcken | 67 | 580_getreidetonne_lebensmitteln_unpraktisch_vorratshaltung |
| 581 | vordenken - mitdenken - nachdenken - folge - sehne | 55 | 581_vordenken_mitdenken_nachdenken_folge |
| 582 | mace - twitter - gadde - nancy - vijaya | 57 | 582_mace_twitter_gadde_nancy |
| 583 | 02 - schellingstrasse - leopoldstrasse - lk - 2023 | 69 | 583_02_schellingstrasse_leopoldstrasse_lk |
| 584 | frankfurt - aufzug - taunusanlage - innenstadt - frankfurts | 37 | 584_frankfurt_aufzug_taunusanlage_innenstadt |
| 585 | odysseylbry - videokanäle - 1031 - bawaatwwxxx - at29 | 38 | 585_odysseylbry_videokanäle_1031_bawaatwwxxx |
| 586 | flugverbotszone - luftraum - flugzeug - nato - flugzeuge | 175 | 586_flugverbotszone_luftraum_flugzeug_nato |
| 587 | überschwemmungen - sydney - australien - australiens - häuser | 32 | 587_überschwemmungen_sydney_australien_australiens |
| 588 | datei - stabildurchdenwandel - audio - download - 03 | 26 | 588_datei_stabildurchdenwandel_audio_download |
| 589 | qfs - quantum - system - gcr - ust | 39 | 589_qfs_quantum_system_gcr |
| 590 | zeitzeugen - band - corona_fakten - ansprechpartner - leise | 69 | 590_zeitzeugen_band_corona_fakten_ansprechpartner |
| 591 | löwenzahn - biologischem - broendegaarden - löwenzahnfelder - extrakts | 33 | 591_löwenzahn_biologischem_broendegaarden_löwenzahnfelder |
| 592 | krieg - russland - nato - deutschland - putin | 182 | 592_krieg_russland_nato_deutschland |
| 593 | grüne - lärchedie - säuerlich - schösslinge - neigungsgruppe | 34 | 593_grüne_lärchedie_säuerlich_schösslinge |
| 594 | stirnlampe - lampe - montagearbeiten - notfallsituation - hände | 37 | 594_stirnlampe_lampe_montagearbeiten_notfallsituation |
| 595 | strom - energiewende - stromausfall - blackout - wärmepumpen | 119 | 595_strom_energiewende_stromausfall_blackout |
| 596 | plan - kommst - lernst - sammlung - q74you | 28 | 596_plan_kommst_lernst_sammlung |
| 597 | eu - europäischen - ahu - unversehrtheit - szijjártó | 119 | 597_eu_europäischen_ahu_unversehrtheit |
| 598 | vpn - leiberl - zeug - wiens - löst | 58 | 598_vpn_leiberl_zeug_wiens |
| 599 | balaton - gemeinschaft - deutschsprachigen - étterem - bearth | 60 | 599_balaton_gemeinschaft_deutschsprachigen_étterem |
| 600 | anderson - mdep - christine - strafrechtlichen - bundesvorstand | 49 | 600_anderson_mdep_christine_strafrechtlichen |
| 601 | micro - honorieren - kurbelradio - pfefferspraypistole - notration | 29 | 601_micro_honorieren_kurbelradio_pfefferspraypistole |
| 602 | at2s - 3506 - 0006 - rvsa - 1309 | 59 | 602_at2s_3506_0006_rvsa |
| 603 | hierfür - petroleumheizung - folgende - vorteile - alternative | 26 | 603_hierfür_petroleumheizung_folgende_vorteile |
| 604 | krankenschwester - schwester - station - hospital - jasmin | 53 | 604_krankenschwester_schwester_station_hospital |
| 605 | thule - lügenpresse - survivaldecke - rettungsdecke - panzerplatten | 89 | 605_thule_lügenpresse_survivaldecke_rettungsdecke |
| 606 | lampe - usb - mini - buchse - ladequelle | 35 | 606_lampe_usb_mini_buchse |
| 607 | stapelbaren - standbodenbeutel - wiederverschließbaren - hauptmahlzeiten - leicht | 37 | 607_stapelbaren_standbodenbeutel_wiederverschließbaren_hauptmahlzeiten |
| 608 | hauptadresse - cdl - bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6 - genodef1m03 - de34 | 51 | 608_hauptadresse_cdl_bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6_genodef1m03 |
| 609 | cern - restricted - republic - vereinten - globalisten | 158 | 609_cern_restricted_republic_vereinten |
| 610 | alternativmedien - skandal - leyen - glück - sperrbezirk | 36 | 610_alternativmedien_skandal_leyen_glück |
| 611 | kekulé - halle - virologen - universität - dienstenthebung | 46 | 611_kekulé_halle_virologen_universität |
| 612 | stew - content - episodes - shedding - treatment | 35 | 612_stew_content_episodes_shedding |
| 613 | klimabonus - asylwerber - asylanten - bevölkerungsaustausch - experimente | 104 | 613_klimabonus_asylwerber_asylanten_bevölkerungsaustausch |
| 614 | crowdbunker - leistungsstarke - kichererbsen - bohnen - leitungswasser | 57 | 614_crowdbunker_leistungsstarke_kichererbsen_bohnen |
| 615 | salzburg - protestmarsch - mozartplatz - 12 - livestreams | 55 | 615_salzburg_protestmarsch_mozartplatz_12 |
| 616 | lampenöl - autark - petroleumlampen - reinheit - brennstoff | 25 | 616_lampenöl_autark_petroleumlampen_reinheit |
| 617 | blackout - stromausfall - blackouts - saurugg - stromnetz | 83 | 617_blackout_stromausfall_blackouts_saurugg |
| 618 | wasserkocher - nassem - brennbarem - windigstem - rekordzeit | 25 | 618_wasserkocher_nassem_brennbarem_windigstem |
| 619 | rt - verwaltungsstrafe - bundesländern - sanktionsmaßnahmen - demokratieabgabe | 31 | 619_rt_verwaltungsstrafe_bundesländern_sanktionsmaßnahmen |
| 620 | verbrenner - 2035 - eu - parlament - abgasnorm | 87 | 620_verbrenner_2035_eu_parlament |
| 621 | schlafsack - thermolite - schlafsackinneren - geringem - isolierung | 25 | 621_schlafsack_thermolite_schlafsackinneren_geringem |
| 622 | ibrahim - brokstedt - amri - regionalzug - anis | 51 | 622_ibrahim_brokstedt_amri_regionalzug |
| 623 | rechtsextremismus - matter - lives - nancy - black | 88 | 623_rechtsextremismus_matter_lives_nancy |
| 624 | anne - foryoupage - foryou - fürdich - fy | 158 | 624_anne_foryoupage_foryou_fürdich |
| 625 | impfschäden - daten - fälle - impfquote - ema | 130 | 625_impfschäden_daten_fälle_impfquote |
| 626 | trump - friedman - stratfor - putin - iranischen | 97 | 626_trump_friedman_stratfor_putin |
| 627 | katastrophenforscher - goersch - akzep - angesicht - finanzberichten | 79 | 627_katastrophenforscher_goersch_akzep_angesicht |
| 628 | gusseisen - willhaben - bechern - oven - dutch | 112 | 628_gusseisen_willhaben_bechern_oven |
| 629 | chatgpt - gedicht - software - ki - lobendes | 48 | 629_chatgpt_gedicht_software_ki |
| 630 | stiefel - bietet - sicherheitsstiefel - innenschuh - obermaterial | 41 | 630_stiefel_bietet_sicherheitsstiefel_innenschuh |
| 631 | heimatkurier - zensursicheren - förderer - kurier - heimat | 66 | 631_heimatkurier_zensursicheren_förderer_kurier |
| 632 | tönnies - landwirte - fleisch - lebensmitteln - fleischbranche | 85 | 632_tönnies_landwirte_fleisch_lebensmitteln |
| 633 | einsatzstiefel - squad - inch - stiefel - halbhoher | 24 | 633_einsatzstiefel_squad_inch_stiefel |
| 634 | interview - führich - thurner - website - catherine | 185 | 634_interview_führich_thurner_website |
| 635 | wildberg - euro - duschgel - 0815 - kaufst | 32 | 635_wildberg_euro_duschgel_0815 |
| 636 | corona - striedinger - gecko - bmi - expertenrat | 150 | 636_corona_striedinger_gecko_bmi |
| 637 | fermentation - entweichen - luftdichten - selbsteingelegten - gärventil | 50 | 637_fermentation_entweichen_luftdichten_selbsteingelegten |
| 638 | polen - warschau - strack - zimmermann - antrittsbesuch | 64 | 638_polen_warschau_strack_zimmermann |
| 639 | kochen - windigstem - nassem - rekordzeit - kanne | 24 | 639_kochen_windigstem_nassem_rekordzeit |
| 640 | id - austria - digitale - verknüpfung - nutzungsmöglichkeiten | 25 | 640_id_austria_digitale_verknüpfung |
| 641 | dänemark - 83 - geboostert - omikron - 785 | 27 | 641_dänemark_83_geboostert_omikron |
| 642 | preise - discounter - teurer - supermarkt - logistikkosten | 56 | 642_preise_discounter_teurer_supermarkt |
| 643 | schweizer - wolldecke - armee - nattokinase - wasserfilter | 55 | 643_schweizer_wolldecke_armee_nattokinase |
| 644 | mikrochips - chinesische - corp - unternehmen - koreanische | 27 | 644_mikrochips_chinesische_corp_unternehmen |
| 645 | laptop - donbass - gefunden - militärbase - verlassenen | 28 | 645_laptop_donbass_gefunden_militärbase |
| 646 | sanktionen - brasilien - unwälzung - kommmt - lossagen | 109 | 646_sanktionen_brasilien_unwälzung_kommmt |
| 647 | handschuhe - aramid - handrückenbereich - handfläche - leder | 29 | 647_handschuhe_aramid_handrückenbereich_handfläche |
| 648 | berichterstattung - ard - cnn - zdf - bbc | 56 | 648_berichterstattung_ard_cnn_zdf |
| 649 | webkante - schafwolle - farbstreifen - originalvorgaben - fertigt | 38 | 649_webkante_schafwolle_farbstreifen_originalvorgaben |
| 650 | eu - ukraine - beitritt - scholz - mitgliedschaft | 186 | 650_eu_ukraine_beitritt_scholz |
| 651 | abonnieren - nochmals - sie - deswegen - mitglied | 23 | 651_abonnieren_nochmals_sie_deswegen |
| 652 | stewpeters10 - your - kryptonite - destress - purchasing | 23 | 652_stewpeters10_your_kryptonite_destress |
| 653 | codex - humanus - alzheimer - wehren - helli | 77 | 653_codex_humanus_alzheimer_wehren |
| 654 | kiew - ukrainischen - donbass - ukrainische - zivilisten | 269 | 654_kiew_ukrainischen_donbass_ukrainische |
| 655 | passierscheine - ausgangssperre - warnstreiktag - pflegepersonals - ringstraße | 49 | 655_passierscheine_ausgangssperre_warnstreiktag_pflegepersonals |
| 656 | schöning - heiko - verbrechen - seilschaften - enthüllungsbuch | 23 | 656_schöning_heiko_verbrechen_seilschaften |
| 657 | kastenform - poncho - petromax - bivy - ultralite | 43 | 657_kastenform_poncho_petromax_bivy |
| 658 | russland - usa - europarat - westen - russische | 357 | 658_russland_usa_europarat_westen |
| 659 | blackrock - benett - vermögensverwalter - glänzt - honecker | 31 | 659_blackrock_benett_vermögensverwalter_glänzt |
| 660 | regierung - sperrung - bundesregierung - neujahrsruhe - lockdown | 145 | 660_regierung_sperrung_bundesregierung_neujahrsruhe |
| 661 | lautstärke - stuht - spreely - puresocialnetwork - pinterest | 50 | 661_lautstärke_stuht_spreely_puresocialnetwork |
| 662 | janich - ballweg - haft - remo - pianist | 118 | 662_janich_ballweg_haft_remo |
| 663 | wisnewski - negativbewertungen - med - german - attacken | 177 | 663_wisnewski_negativbewertungen_med_german |
| 664 | thurner - grammy - catherine - hollywood - ahnen | 123 | 664_thurner_grammy_catherine_hollywood |
| 665 | eier - haltbar - sprühgetrocknet - bodenhaltung - hühnereiern | 33 | 665_eier_haltbar_sprühgetrocknet_bodenhaltung |
| 666 | idealism - prevails - reformationszeit - wunder - paradies | 32 | 666_idealism_prevails_reformationszeit_wunder |
| 667 | nontschew - bowl - mirco - kaufman - obduktion | 91 | 667_nontschew_bowl_mirco_kaufman |
| 668 | zirkusaffe - er - hund - endet - schwurbel | 222 | 668_zirkusaffe_er_hund_endet |
| 669 | miriam - hope - wohnzimmertalk - sendung - 5777 | 127 | 669_miriam_hope_wohnzimmertalk_sendung |
| 670 | einlagern - co2 - liter - set - beuteln | 67 | 670_einlagern_co2_liter_set |
| 671 | teamheimat - team - squad - montagsdemos - überschnitt | 36 | 671_teamheimat_team_squad_montagsdemos |
| 672 | geschützt - niedrigstand - lagerbestand - eco - begrenzter | 36 | 672_geschützt_niedrigstand_lagerbestand_eco |
| 673 | gedichte - adventskalender - geschichten - kleinentürchen - grzywa | 24 | 673_gedichte_adventskalender_geschichten_kleinentürchen |
| 674 | gierhake - entschuldigung - artikel - kollektivismus - individualismus | 156 | 674_gierhake_entschuldigung_artikel_kollektivismus |
| 675 | pei - bkk - unfälle - otte - kandidat | 38 | 675_pei_bkk_unfälle_otte |
| 676 | pizzagate - bestsellerreihe - erfolgsautor - jahresrückblicke - predictive | 92 | 676_pizzagate_bestsellerreihe_erfolgsautor_jahresrückblicke |
| 677 | chenoweth - iqm - sharp - gewaltfreie - bewegungen | 49 | 677_chenoweth_iqm_sharp_gewaltfreie |
| 678 | ordnungsamt - restaurantleiter - restaurant - 2g - mitarbeitenden | 34 | 678_ordnungsamt_restaurantleiter_restaurant_2g |
| 679 | gasheizung - mobile - heater - mr - ölpumpen | 83 | 679_gasheizung_mobile_heater_mr |
| 680 | lira - türkei - erdogan - erdoğan - türkische | 70 | 680_lira_türkei_erdogan_erdoğan |
| 681 | feindbild - marschiert - putin - verhasst - taiwan | 121 | 681_feindbild_marschiert_putin_verhasst |
| 682 | nato - stoltenberg - atomwaffen - waffen - ukraine | 269 | 682_nato_stoltenberg_atomwaffen_waffen |
| 683 | hildegard - jva - pervitin - bingen - menschenkette | 34 | 683_hildegard_jva_pervitin_bingen |
| 684 | obdachlose - bahnsteigen - platt - obdachlosen - 3g | 38 | 684_obdachlose_bahnsteigen_platt_obdachlosen |
| 685 | kapitalismus - marktwirtschaft - klimaterroristen - umsatzeinbrüche - verzeichnen | 82 | 685_kapitalismus_marktwirtschaft_klimaterroristen_umsatzeinbrüche |
| 686 | herman - popp - liebe - leserin - zuschrift | 144 | 686_herman_popp_liebe_leserin |
| 687 | hartkekse - epas - trekkingbereich - wassergehalt - tagesration | 23 | 687_hartkekse_epas_trekkingbereich_wassergehalt |
| 688 | lisa - fitz - satire - seelenkräfte - swr | 56 | 688_lisa_fitz_satire_seelenkräfte |
| 689 | webinar - stattfindet - kollateral - hiermit - kopfschmerzen | 61 | 689_webinar_stattfindet_kollateral_hiermit |
| 690 | icke - irlmaier - david - love - shariraye | 51 | 690_icke_irlmaier_david_love |
| 691 | verbinde - punkte - bernhard - riegler - auf1 | 98 | 691_verbinde_punkte_bernhard_riegler |
| 692 | artenschutz - offshore - windenergie - infraschall - windkraftanlagen | 52 | 692_artenschutz_offshore_windenergie_infraschall |
| 693 | taiwan - china - taiwans - chinesischen - chinesen | 69 | 693_taiwan_china_taiwans_chinesischen |
| 694 | defcon - pope - stufe - steps - death | 22 | 694_defcon_pope_stufe_steps |
| 695 | ulrike - guérot - guerot - schweigt - aussperren | 65 | 695_ulrike_guérot_guerot_schweigt |
| 696 | 1at - mitreden - fairdenker - fairdenken - parteilos | 79 | 696_1at_mitreden_fairdenker_fairdenken |
| 697 | atomkraftwerk - odessa - akw - saporischschja - ukrenerho | 70 | 697_atomkraftwerk_odessa_akw_saporischschja |
| 698 | bevorratung - expeditionsbereich - speziellen - katastrophenschutz - bewährt | 22 | 698_bevorratung_expeditionsbereich_speziellen_katastrophenschutz |
| 699 | wasserfilter - guardian - modernste - purifier - preisgekrönte | 22 | 699_wasserfilter_guardian_modernste_purifier |
| 700 | cibis - liefersituation - landgrebe - gleichwertigen - abweichen | 40 | 700_cibis_liefersituation_landgrebe_gleichwertigen |
| 701 | matthie - angesagteste - carolinmatthie - schickeria - twitch | 27 | 701_matthie_angesagteste_carolinmatthie_schickeria |
| 702 | kaniber - brot - brotbackautomat - bauern - mindesthaltbarkeit | 45 | 702_kaniber_brot_brotbackautomat_bauern |
| 703 | memo - xlm - doge - xrp - f36amywfs2n6sxxwfmzpgz5vs2gnbrtlajxvdzepnvrif4c56r1k2pfgevvfffbztpn | 22 | 703_memo_xlm_doge_xrp |
| 704 | taiwan - china - südchinesischen - philippinen - altmeister | 44 | 704_taiwan_china_südchinesischen_philippinen |
| 705 | hierfür - petroleumheizung - flyer - folgende - vorteile | 42 | 705_hierfür_petroleumheizung_flyer_folgende |
| 706 | schwab - ki - klaus - altman - inklusiv | 44 | 706_schwab_ki_klaus_altman |
| 707 | vollmilchpulver - basics - bio - ef - grundnahrungsmitteln | 22 | 707_vollmilchpulver_basics_bio_ef |
| 708 | aufklärungsvideos - hilfreiche - jeglicher - frei - zensur | 31 | 708_aufklärungsvideos_hilfreiche_jeglicher_frei |
| 709 | lieferumfang - enthalten - pulverlackiertes - zusatzheizer - handelsübliche | 22 | 709_lieferumfang_enthalten_pulverlackiertes_zusatzheizer |
| 710 | frauen - untergebracht - männern - asylheimen - menschenhandel | 59 | 710_frauen_untergebracht_männern_asylheimen |
| 711 | windräder - vorrichtung - 2030 - windkraft - ausbau | 37 | 711_windräder_vorrichtung_2030_windkraft |
| 712 | einsiedel - asylheim - peutenhausen - eier - volleipulver | 52 | 712_einsiedel_asylheim_peutenhausen_eier |
| 713 | hotel - hotels - rumänien - impfpflichtgesetzes - lockdown | 55 | 713_hotel_hotels_rumänien_impfpflichtgesetzes |
| 714 | hellsten - bernays - köpfe - automatismen - gesellschaftsbild | 193 | 714_hellsten_bernays_köpfe_automatismen |
| 715 | kohn - rechtsstaat - pen - rstp - zemmour | 61 | 715_kohn_rechtsstaat_pen_rstp |
| 716 | dummheit - drew - blöd - thegreatregret - regret | 191 | 716_dummheit_drew_blöd_thegreatregret |
| 717 | hymne - lebe - denkt - selbstfaktenfriedenfreiheit - morgen | 203 | 717_hymne_lebe_denkt_selbstfaktenfriedenfreiheit |
| 718 | mfg - pagitz - bundesparteiobmann - landessprecher - dr | 57 | 718_mfg_pagitz_bundesparteiobmann_landessprecher |
| 719 | mfg - bundesvorstand - leasingraten - leasingverhältnisse - swiftverfahren | 29 | 719_mfg_bundesvorstand_leasingraten_leasingverhältnisse |
| 720 | stiefel - vorteil - rundsendung - sepp - forcher | 36 | 720_stiefel_vorteil_rundsendung_sepp |
| 721 | advent - weihnachtszeit - kunst - baum - weihnacht | 125 | 721_advent_weihnachtszeit_kunst_baum |
| 722 | florida - desantis - ron - theater - gouverneur | 65 | 722_florida_desantis_ron_theater |
| 723 | solltest - nebenwirkung - streams - interessieren - impfungen | 28 | 723_solltest_nebenwirkung_streams_interessieren |
| 724 | sparkassen - volksbanken - bank - banken - kunden | 67 | 724_sparkassen_volksbanken_bank_banken |
| 725 | feb - october - day - ratten - march | 96 | 725_feb_october_day_ratten |
| 726 | italien - meloni - draghi - regionalwahlen - giorgia | 50 | 726_italien_meloni_draghi_regionalwahlen |
| 727 | gewalt - ruppert - franz - psychologie - dynamiken | 43 | 727_gewalt_ruppert_franz_psychologie |
| 728 | iran - pakistan - cia - nuklearen - corporation | 128 | 728_iran_pakistan_cia_nuklearen |
| 729 | facebook - würrer - bildquelle - thumbnail - telegram | 53 | 729_facebook_würrer_bildquelle_thumbnail |
| 730 | kardinal - müller - kaserne - überwachungsstaat - ludwig | 125 | 730_kardinal_müller_kaserne_überwachungsstaat |
| 731 | lauterbach - fdp - corona - karl - maßnahmen | 180 | 731_lauterbach_fdp_corona_karl |
| 732 | reformation - siga - rebell - videokanal - nachrichtenkanal | 92 | 732_reformation_siga_rebell_videokanal |
| 733 | schubert - livestream - gerd - psychotherapie - burnout | 108 | 733_schubert_livestream_gerd_psychotherapie |
| 734 | kinderpornos - levine - kinderpornografie - meek - records | 44 | 734_kinderpornos_levine_kinderpornografie_meek |
| 735 | fuellmich - reiner - wohnräume - haintz_ - behandlungsforscher | 52 | 735_fuellmich_reiner_wohnräume_haintz_ |
| 736 | löwenmamas - erscheinender - drohszenarien - kopflosem - unmündigen | 57 | 736_löwenmamas_erscheinender_drohszenarien_kopflosem |
| 737 | gasheizofen - wettergeschützten - belüfteten - innenbereich - außenbereich | 35 | 737_gasheizofen_wettergeschützten_belüfteten_innenbereich |
| 738 | zahnpulver - calcium - lavera - zähne - birkengold | 38 | 738_zahnpulver_calcium_lavera_zähne |
| 739 | salman - saudi - mohammed - schwert - trump | 36 | 739_salman_saudi_mohammed_schwert |
| 740 | wolff - komplex - förderer - digital - zusammenballung | 45 | 740_wolff_komplex_förderer_digital |
| 741 | impfpflicht - konferenz - impfnebenwirkungen - impfstoffe - unerwartet | 304 | 741_impfpflicht_konferenz_impfnebenwirkungen_impfstoffe |
| 742 | ngos - poppel - patrick - doctor - chronik | 118 | 742_ngos_poppel_patrick_doctor |
| 743 | agamben - giorgio - nachwort - essayband - sodenkamp | 31 | 743_agamben_giorgio_nachwort_essayband |
| 744 | raphael - bonelli - bauchgefühle - nützen - entstehen | 55 | 744_raphael_bonelli_bauchgefühle_nützen |
| 745 | wien - strasse - northeim - visitenkarten - aschbach | 66 | 745_wien_strasse_northeim_visitenkarten |
| 746 | wien - zeichen - friedliches - warnstreik - lautes | 110 | 746_wien_zeichen_friedliches_warnstreik |
| 747 | mitgründer - schriftlich - umfangreiche - analyst - organisierten | 21 | 747_mitgründer_schriftlich_umfangreiche_analyst |
| 748 | veritas - project - rosenberg - medienberichterstattung - york | 49 | 748_veritas_project_rosenberg_medienberichterstattung |
| 749 | musik - doorjammer - podcast - deezer - strauss | 63 | 749_musik_doorjammer_podcast_deezer |
| 750 | minderheit - rechtsextremismus - gesellschaft - unbeteiligtem - hass | 155 | 750_minderheit_rechtsextremismus_gesellschaft_unbeteiligtem |
| 751 | senat - 2g - plus - mg - lüneburg | 85 | 751_senat_2g_plus_mg |
| 752 | bundespressekonferenz - bußgeld - verhaftet - ausschluss - pressefreiheit | 41 | 752_bundespressekonferenz_bußgeld_verhaftet_ausschluss |
| 753 | sönnichsen - freigesprochen - prof - amtsanmaßung - anklagepunkten | 31 | 753_sönnichsen_freigesprochen_prof_amtsanmaßung |
| 754 | copyright - use - fair - otherwise - materials | 49 | 754_copyright_use_fair_otherwise |
| 755 | florian - karlsruhe - justizanstalt - harbarth - zkm | 22 | 755_florian_karlsruhe_justizanstalt_harbarth |
| 756 | demos - plattform - fgh - respekt - concoy | 33 | 756_demos_plattform_fgh_respekt |
| 757 | puresocialnetwork - spreely - pinterest - mewe - parler | 21 | 757_puresocialnetwork_spreely_pinterest_mewe |
| 758 | logo - magdeburg - mohrenbrauerei - weiz - mohrenbräu | 36 | 758_logo_magdeburg_mohrenbrauerei_weiz |
| 759 | per - banküberweisung - vollstahlaxt - at483500000000163378 - rvsaat2s | 56 | 759_per_banküberweisung_vollstahlaxt_at483500000000163378 |
| 760 | gauck - lebensglück - lebensfreude - frieren - bundespräsident | 32 | 760_gauck_lebensglück_lebensfreude_frieren |
| 761 | prof - dr - med - doctor - gunnar | 35 | 761_prof_dr_med_doctor |
| 762 | wolff - holter - vivoterra - ernst - schatzkammer | 58 | 762_wolff_holter_vivoterra_ernst |
| 763 | dynamo - kurbel - powerstation - stromgenerierung - ausflüge | 21 | 763_dynamo_kurbel_powerstation_stromgenerierung |
| 764 | perspektiven - spazierten - veranstalteten - arge - bundeskazleramt | 49 | 764_perspektiven_spazierten_veranstalteten_arge |
| 765 | video - impfgeschädigten - bhakdi - schädigungen - impfgeschädigte | 43 | 765_video_impfgeschädigten_bhakdi_schädigungen |
| 766 | stew - content - advertise - episodes - ilverfahren | 38 | 766_stew_content_advertise_episodes |
| 767 | erholsameren - schnarchen - herzfrequenz - gleichmäßige - abzubauen | 67 | 767_erholsameren_schnarchen_herzfrequenz_gleichmäßige |
| 768 | band - kapitel - schwindel - dan - lula | 172 | 768_band_kapitel_schwindel_dan |
| 769 | eier - geschriebenes - rührei - backzutat - omelette | 42 | 769_eier_geschriebenes_rührei_backzutat |
| 770 | salzkristall - leuchte - stimmungslicht - diffuser - ionisator | 21 | 770_salzkristall_leuchte_stimmungslicht_diffuser |
| 771 | aperio - sono - zusammenschnitt - videosicherung - eklig | 38 | 771_aperio_sono_zusammenschnitt_videosicherung |
| 772 | vdfr - plötz - bäcker - ect - raumluft | 41 | 772_vdfr_plötz_bäcker_ect |
| 773 | neutralität - silvester - österreich - bernadette - bestürzend | 46 | 773_neutralität_silvester_österreich_bernadette |
| 774 | ripple - cftc - sec - windows - investors | 45 | 774_ripple_cftc_sec_windows |
| 775 | gott - fabianer - density - erde - niemals | 149 | 775_gott_fabianer_density_erde |
| 776 | zensursicheren - förderer - heimatkurier - instagramwenn - rundbrief | 73 | 776_zensursicheren_förderer_heimatkurier_instagramwenn |
| 777 | ecoflow - delta - solarpanel - powerstation - heimwerkergeräte | 29 | 777_ecoflow_delta_solarpanel_powerstation |
| 778 | schuler - gleichschritt - ralf - doku - volkssport | 31 | 778_schuler_gleichschritt_ralf_doku |
| 779 | neuinfektionen - todesfälle - österreichweit - 24 - stunden | 33 | 779_neuinfektionen_todesfälle_österreichweit_24 |
| 780 | haritaki - libido - beschwerden - dmso - fruchtbarkeit | 41 | 780_haritaki_libido_beschwerden_dmso |
| 781 | shell - bmw - china - dax - opec | 152 | 781_shell_bmw_china_dax |
| 782 | lidl - fleisch - supermarktkette - produkten - discounter | 26 | 782_lidl_fleisch_supermarktkette_produkten |
| 783 | beete - gärventilschon - säuerlichen - aromatischen - karotten | 26 | 783_beete_gärventilschon_säuerlichen_aromatischen |
| 784 | ifo - versorgungskrise - industrie - produktion - laack | 81 | 784_ifo_versorgungskrise_industrie_produktion |
| 785 | de34 - 9544 - 9466 - 7016 - genodef1m03 | 32 | 785_de34_9544_9466_7016 |
| 786 | övp - sobotka - korruptionsuntersuchungsausschuss - bellen - befragungstag | 48 | 786_övp_sobotka_korruptionsuntersuchungsausschuss_bellen |
| 787 | eier - vogelgrippe - hühner - investment - eierfarm | 26 | 787_eier_vogelgrippe_hühner_investment |
| 788 | unserige - interviewgäste - geposteten - distanziere - haftungsausschluss | 21 | 788_unserige_interviewgäste_geposteten_distanziere |
| 789 | getestet - gebratwurstete - pcr - test - tests | 33 | 789_getestet_gebratwurstete_pcr_test |
| 790 | nrg - pasta - ölbasis - teflonbeschichtung - unterseite | 40 | 790_nrg_pasta_ölbasis_teflonbeschichtung |
| 791 | wärmflasche - wasserkanister - flaschen - herstellung - trinkwasserreserve | 30 | 791_wärmflasche_wasserkanister_flaschen_herstellung |
| 792 | nelson - fügsamen - kult - rebellieren - gesellschaft | 140 | 792_nelson_fügsamen_kult_rebellieren |
| 793 | laune - schlafsprachnachricht - mäckle - gute - 02 | 34 | 793_laune_schlafsprachnachricht_mäckle_gute |
| 794 | zdf - ard - zuschauer - bellut - intendant | 63 | 794_zdf_ard_zuschauer_bellut |
| 795 | gendersprache - gender - toiletten - supereinfach - zahnbürste | 35 | 795_gendersprache_gender_toiletten_supereinfach |
| 796 | fußballerische - sauerteigbroten - tymoschtschuk - speick - geißler | 31 | 796_fußballerische_sauerteigbroten_tymoschtschuk_speick |
| 797 | willst - verworfen - scheiss - allerseits - sanften | 108 | 797_willst_verworfen_scheiss_allerseits |
| 798 | bezahlt - erfolgreich - stellplätze - nessmuk - woodcraft | 71 | 798_bezahlt_erfolgreich_stellplätze_nessmuk |
| 799 | borstel - 750 - krankenhäuser - 34 - rwi | 45 | 799_borstel_750_krankenhäuser_34 |
| 800 | bezahlte - info - werbung - auf1 - folgen | 20 | 800_bezahlte_info_werbung_auf1 |
| 801 | rücktritte - raketenofen - 169 - supertopf - treuemonat | 25 | 801_rücktritte_raketenofen_169_supertopf |
| 802 | 6713 - aspkat2lxxx - 0058 - at50 - 0321 | 207 | 802_6713_aspkat2lxxx_0058_at50 |
| 803 | polizei - lka - einvernahme - polizisten - pirchner | 140 | 803_polizei_lka_einvernahme_polizisten |
| 804 | biontech - kontrollgruppe - herstellerangaben - fibrous - impfstoff | 71 | 804_biontech_kontrollgruppe_herstellerangaben_fibrous |
| 805 | thecrowhouse - crowhouse - busfahrer - linz - autocorso | 67 | 805_thecrowhouse_crowhouse_busfahrer_linz |
| 806 | suat - alter - gestorben - unerwartet - trauert | 39 | 806_suat_alter_gestorben_unerwartet |
| 807 | fpö - internet - russland - runet - paralympics | 72 | 807_fpö_internet_russland_runet |
| 808 | todesfälle - covid - daten - rki - virus | 146 | 808_todesfälle_covid_daten_rki |
| 809 | medizin - corih - dufayet - krenn - behandlungsverbund | 45 | 809_medizin_corih_dufayet_krenn |
| 810 | gas - industrie - fracking - produktion - energiekrise | 159 | 810_gas_industrie_fracking_produktion |
| 811 | impfpflicht - pflege - einrichtungsbezogene - kündigung - einrichtungen | 159 | 811_impfpflicht_pflege_einrichtungsbezogene_kündigung |
| 812 | nichttragens - verbeiten - laster - monithor - großflächig | 72 | 812_nichttragens_verbeiten_laster_monithor |
| 813 | diskriminierung - conco - filmisch - friedfertige - respekt | 45 | 813_diskriminierung_conco_filmisch_friedfertige |
| 814 | mockmill - getreidemühlen - elektrosmog - getreide - umgebung | 52 | 814_mockmill_getreidemühlen_elektrosmog_getreide |
| 815 | zion - devil - metal - grammys - cokes | 40 | 815_zion_devil_metal_grammys |
| 816 | miriam - 5777 - be80 - 2522 - hope | 116 | 816_miriam_5777_be80_2522 |
| 817 | eintopf - zukunftskonferenz - suppentopf - geräuchertem - herzhafter | 64 | 817_eintopf_zukunftskonferenz_suppentopf_geräuchertem |
| 818 | gegenuni - briefverkehr - aktualisierter - sommersemester - terheş | 60 | 818_gegenuni_briefverkehr_aktualisierter_sommersemester |
| 819 | antikörper - omicron - impfstoff - dosen - biontech | 67 | 819_antikörper_omicron_impfstoff_dosen |
| 820 | england - todesfälle - todesfällen - covid - myocarditis | 48 | 820_england_todesfälle_todesfällen_covid |
| 821 | denkt - dran - selbst - drin - kurzgeschichtenschreiber | 42 | 821_denkt_dran_selbst_drin |
| 822 | gettr - user - dns - com - checkmatenews | 26 | 822_gettr_user_dns_com |
| 823 | bewerbungen - abonnenten - erinnerung - ernstgemeinte - reset | 56 | 823_bewerbungen_abonnenten_erinnerung_ernstgemeinte |
| 824 | mütze - eingeweide - vertilgt - topfset - alpine | 40 | 824_mütze_eingeweide_vertilgt_topfset |
| 825 | covid - bourla - impfstoff - 19 - impfstoffe | 120 | 825_covid_bourla_impfstoff_19 |
| 826 | djokovic - novak - tennis - open - australian | 20 | 826_djokovic_novak_tennis_open |
| 827 | ñaupany - video - europe - puma - zelenko | 70 | 827_ñaupany_video_europe_puma |
| 828 | reichelt - chefredakteur - döpfner - springer - vernichtungsfeldzug | 138 | 828_reichelt_chefredakteur_döpfner_springer |
| 829 | haimbuchner - 1hmphkh69tm29hwcfmjdyyl1oxnsgaaitv - de65701204005184586005 - dieaktionaerin - strafanzeige | 57 | 829_haimbuchner_1hmphkh69tm29hwcfmjdyyl1oxnsgaaitv_de65701204005184586005_dieaktionaerin |
| 830 | aktivisten - busfahrer - klimaaktivisten - generation - zerren | 88 | 830_aktivisten_busfahrer_klimaaktivisten_generation |
| 831 | rtv - fake - doku - scharfmüller - kulissen | 149 | 831_rtv_fake_doku_scharfmüller |
| 832 | russland - sanktionen - eu - kaltblütiges - vergeltungspaket | 113 | 832_russland_sanktionen_eu_kaltblütiges |
| 833 | 92k - zahlen - krankenstände - abschiedssong - pädophilem | 87 | 833_92k_zahlen_krankenstände_abschiedssong |
| 834 | bitcoin - 19q8odiu2zar7dfl18ouqivwauvnripceu - 1wxoeuy6ghetkmurdiipllwvya1vh2iwa - core - 10514 | 47 | 834_bitcoin_19q8odiu2zar7dfl18ouqivwauvnripceu_1wxoeuy6ghetkmurdiipllwvya1vh2iwa_core |
| 835 | wildgebieten - selbstreinigend - alleskönner - absoluter - entwicklungsländern | 20 | 835_wildgebieten_selbstreinigend_alleskönner_absoluter |
| 836 | geometrie - heilige - grüße - liebe - denkbarrieren | 88 | 836_geometrie_heilige_grüße_liebe |
| 837 | swiss - kommissionspräsidentin - leyen - terhes - parlamentarier | 40 | 837_swiss_kommissionspräsidentin_leyen_terhes |
| 838 | wildgebieten - pump - selbstreinigend - alleskönner - absoluter | 20 | 838_wildgebieten_pump_selbstreinigend_alleskönner |
| 839 | lampenöl - autark - ausgießtülle - trichter - lagerbar | 25 | 839_lampenöl_autark_ausgießtülle_trichter |
| 840 | selbstverteidigungsschirm - alltagsgegenstand - unterliegt - gewöhnlichen - dach | 31 | 840_selbstverteidigungsschirm_alltagsgegenstand_unterliegt_gewöhnlichen |
| 841 | pumpernickel - 40er - roggenbrot - langzeithaltbarkeit - zurückblicken | 20 | 841_pumpernickel_40er_roggenbrot_langzeithaltbarkeit |
| 842 | slim - feldhose - fit - bdu - teesar | 42 | 842_slim_feldhose_fit_bdu |
| 843 | frisierter - veruntreut - umverteilt - manipulierter - epochaler | 35 | 843_frisierter_veruntreut_umverteilt_manipulierter |
| 844 | piexon - jpx6 - protector - jet - schuss | 25 | 844_piexon_jpx6_protector_jet |
| 845 | demonstrationen - soundbite - thema2 - politik3 - corona | 60 | 845_demonstrationen_soundbite_thema2_politik3 |
| 846 | schrank - läuft - monteur - straßenbahn - aufkleber | 217 | 846_schrank_läuft_monteur_straßenbahn |
| 847 | umrüstgasschlauch - propangasflaschen - 11kg - mehrwöchiger - widerstandsfähigen | 20 | 847_umrüstgasschlauch_propangasflaschen_11kg_mehrwöchiger |
| 848 | tarp - ultraleicht - geschichten - tarps - bushcrafter | 39 | 848_tarp_ultraleicht_geschichten_tarps |
| 849 | thais - chasing - vergewaltigt - horse - richterin | 56 | 849_thais_chasing_vergewaltigt_horse |
| 850 | gehirn - repräsentiert - herz - logan - mengele | 96 | 850_gehirn_repräsentiert_herz_logan |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: multilingual
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
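These settings map roughly onto the `BERTopic` constructor as sketched below (a minimal sketch, not the exact training script; the embedding model and the UMAP/HDBSCAN sub-models are assumptions, since they are not listed here).
```python
from bertopic import BERTopic
# Minimal sketch: re-creating a BERTopic instance with the hyperparameters listed above.
# The embedding model and the dimensionality-reduction/clustering sub-models are assumptions.
topic_model = BERTopic(
    language="multilingual",
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
    zeroshot_min_similarity=0.7,  # zeroshot arguments require a recent BERTopic release
    zeroshot_topic_list=None,
)
# topics, probs = topic_model.fit_transform(docs)  # docs: list of input documents
```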
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.6.1
* Transformers: 4.38.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
bbin2022/distilbert-base-uncased-finetuned-cola
|
bbin2022
| 2024-04-10T17:56:09Z | 117 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T17:52:05Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6442
- Matthews Correlation: 0.5173
## Model description
More information needed
## Intended uses & limitations
More information needed
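A minimal inference sketch is shown below (hedged: the CoLA-style acceptability task is inferred from the model name, and the label mapping should be verified against the model config).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Minimal sketch: acceptability classification with this checkpoint.
# The task and label meanings are assumptions inferred from the model name.
model_id = "bbin2022/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("The book was read by the student.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))
print(predicted_class, model.config.id2label.get(predicted_class, predicted_class))
```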
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4699 | 0.4371 |
| 0.4547 | 2.0 | 536 | 0.4912 | 0.4847 |
| 0.4547 | 3.0 | 804 | 0.5471 | 0.5056 |
| 0.236 | 4.0 | 1072 | 0.6429 | 0.5104 |
| 0.236 | 5.0 | 1340 | 0.6442 | 0.5173 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Weni/WeniGPT-Agents-Mistral-1.0.0-SFT
|
Weni
| 2024-04-10T17:55:40Z | 0 | 0 |
trl
|
[
"trl",
"safetensors",
"SFT",
"WeniGPT",
"pt",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | null | 2024-04-10T16:30:14Z |
---
license: mit
library_name: "trl"
tags:
- SFT
- WeniGPT
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Weni/WeniGPT-Agents-Mistral-1.0.0-SFT
results: []
language: ['pt']
---
# Weni/WeniGPT-Agents-Mistral-1.0.0-SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the dataset Weni/wenigpt-agent-1.4.0 with the SFT trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).
Description: an experiment with SFT and a new tokenizer configuration for the Mistral chat template.
It achieves the following results on the evaluation set:
{'eval_loss': 1.147840142250061, 'eval_runtime': 15.9821, 'eval_samples_per_second': 2.878, 'eval_steps_per_second': 1.439, 'epoch': 2.99}
## Intended uses & limitations
This model has not been trained to avoid specific instructions.
## Training procedure
Finetuning was done on the model mistralai/Mistral-7B-Instruct-v0.2 with the following prompt:
```
---------------------
System_prompt:
Agora você se chama {name}, você é {occupation} e seu objetivo é {chatbot_goal}. O adjetivo que mais define a sua personalidade é {adjective} e você se comporta da seguinte forma:
{instructions_formatted}
{context_statement}
Lista de requisitos:
- Responda de forma natural, mas nunca fale sobre um assunto fora do contexto.
- Nunca traga informações do seu próprio conhecimento.
- Repito é crucial que você responda usando apenas informações do contexto.
- Nunca mencione o contexto fornecido.
- Nunca mencione a pergunta fornecida.
- Gere a resposta mais útil possível para a pergunta usando informações do conexto acima.
- Nunca elabore sobre o porque e como você fez a tarefa, apenas responda.
---------------------
Question:
{question}
---------------------
Response:
{answer}
---------------------
```
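For inference, a minimal sketch along the following lines could be used (hedged: the repository id comes from this card, but the generation settings are assumptions, and the system prompt should be filled in following the template above).
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Minimal sketch: plain causal-LM generation with the fine-tuned checkpoint.
# Quantized/LoRA-specific loading (as used during training) is not shown here.
model_id = "Weni/WeniGPT-Agents-Mistral-1.0.0-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
prompt = "Question:\nQual é o horário de atendimento?\n---------------------\nResponse:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```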
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- per_device_train_batch_size: 2
- per_device_eval_batch_size: 2
- gradient_accumulation_steps: 2
- num_gpus: 1
- total_train_batch_size: 4
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 312
- quantization_type: bitsandbytes
- LoRA: ("\n - bits: 4\n - use_exllama: True\n - device_map: auto\n - use_cache: False\n - lora_r: 16\n - lora_alpha: 32\n - lora_dropout: 0.05\n - bias: none\n - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']\n - task_type: CAUSAL_LM",)
### Training results
### Framework versions
- transformers==4.38.2
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.22.2
- seqeval==1.2.2
- optimum==1.18.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.6
- trl==0.8.1
- accelerate==0.29.2
- coloredlogs==15.0.1
- traitlets==5.14.2
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.4/autoawq-0.2.4+cu118-cp310-cp310-linux_x86_64.whl
### Hardware
- Cloud provided: runpod.io
|
BuroIdentidadDigital/pasaporte_Mex_v0
|
BuroIdentidadDigital
| 2024-04-10T17:49:27Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-10T17:24:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
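Since the repository is tagged as a vision-encoder-decoder model for image-text-to-text, a loading sketch might look like the following (hedged: the model class, processor, and input format are inferred from the tags only and should be verified against the repository files).
```python
from transformers import AutoProcessor, VisionEncoderDecoderModel
from PIL import Image
# Minimal sketch: generic image-to-text generation; the task details are assumptions.
model_id = "BuroIdentidadDigital/pasaporte_Mex_v0"
processor = AutoProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image = Image.open("document.jpg").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```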
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unicamp-dl/ptt5-large-portuguese-vocab
|
unicamp-dl
| 2024-04-10T17:49:10Z | 2,089 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"arxiv:2008.09144",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
unicamp-dl/ptt5-small-portuguese-vocab
|
unicamp-dl
| 2024-04-10T17:49:02Z | 603 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"arxiv:2008.09144",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
Jehicoob/distilroberta-base-mrpc-glue-jehicoob
|
Jehicoob
| 2024-04-10T17:44:50Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T17:32:23Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue-jehicoob
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-jehicoob
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4221
- Accuracy: 0.8284
- F1: 0.8746
## Model description
More information needed
## Intended uses & limitations
More information needed
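A minimal sentence-pair classification sketch is shown below (hedged: MRPC-style paraphrase detection is inferred from the model name, and the label mapping is an assumption).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Minimal sketch: paraphrase classification of a sentence pair.
model_id = "Jehicoob/distilroberta-base-mrpc-glue-jehicoob"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer(
    "The company said profits rose in the quarter.",
    "Profits increased during the quarter, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```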
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5233 | 1.09 | 500 | 0.4221 | 0.8284 | 0.8746 |
| 0.3544 | 2.18 | 1000 | 0.8054 | 0.8186 | 0.8679 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ledmands/dqn_Pacman-v5_lrate5e-5_v2
|
ledmands
| 2024-04-10T17:40:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"ALE/Pacman-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-10T17:40:18Z |
---
library_name: stable-baselines3
tags:
- ALE/Pacman-v5
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ALE/Pacman-v5
type: ALE/Pacman-v5
metrics:
- type: mean_reward
value: 190.60 +/- 88.60
name: mean_reward
verified: false
---
# **DQN** Agent playing **ALE/Pacman-v5**
This is a trained model of a **DQN** agent playing **ALE/Pacman-v5**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env ALE/Pacman-v5 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Pacman-v5 -f logs/ -orga ledmands
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 5e-05),
('learning_starts', 100000),
('n_timesteps', 500000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
unicamp-dl/ptt5-base-t5-vocab
|
unicamp-dl
| 2024-04-10T17:39:41Z | 379 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"arxiv:2008.09144",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
unicamp-dl/ptt5-small-t5-vocab
|
unicamp-dl
| 2024-04-10T17:39:04Z | 388 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"arxiv:2008.09144",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's T5 original and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
Ryu-m0m/Taxi-v3
|
Ryu-m0m
| 2024-04-10T17:30:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-10T16:55:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Ryu-m0m/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
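A short greedy-evaluation loop could look like this (a minimal sketch: it assumes the Gymnasium step API and that the pickled dictionary stores the Q-table under a "qtable" key and the filename "q-learning.pkl", which follow the usual course convention and should be verified).
```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download
# Minimal sketch: greedy rollout with the loaded Q-table.
# The "qtable" key and the pickle filename are assumptions.
with open(hf_hub_download(repo_id="Ryu-m0m/Taxi-v3", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)
env = gym.make(model["env_id"])
qtable = model["qtable"]
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```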
|
stablediffusionapi/ae-realistic-v6
|
stablediffusionapi
| 2024-04-10T17:27:40Z | 29 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-04-10T17:25:02Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ae-realistic-v6 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "ae-realistic-v6"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ae-realistic-v6)
Model link: [View model](https://modelslab.com/models/ae-realistic-v6)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "ae-realistic-v6",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
satyanshu404/long-t5-local-base-finetuned-justification-v07
|
satyanshu404
| 2024-04-10T17:23:35Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-local-base",
"base_model:finetune:google/long-t5-local-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-10T17:10:01Z |
---
license: apache-2.0
base_model: google/long-t5-local-base
tags:
- generated_from_trainer
model-index:
- name: long-t5-local-base-finetuned-justification-v07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-local-base-finetuned-justification-v07
This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8118
## Model description
More information needed
## Intended uses & limitations
More information needed
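A minimal text-to-text generation sketch is given below (hedged: the expected input format for the justification task is not documented here, so the input text is purely illustrative).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Minimal sketch: generic seq2seq inference with this checkpoint.
model_id = "satyanshu404/long-t5-local-base-finetuned-justification-v07"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
text = "Example long input document for which a justification should be generated."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```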
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.2301 | 1.0 | 676 | 2.8118 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
krnl/clip-vit-large-patch14
|
krnl
| 2024-04-10T17:17:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"clip",
"zero-shot-image-classification",
"vision",
"arxiv:2103.00020",
"arxiv:1908.04913",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-04-10T17:09:51Z |
---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card: CLIP
Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
### Model Date
January 2021
### Model Type
The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
### Documents
- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Performance and Limitations
### Performance
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
- Food101
- CIFAR10
- CIFAR100
- Birdsnap
- SUN397
- Stanford Cars
- FGVC Aircraft
- VOC2007
- DTD
- Oxford-IIIT Pet dataset
- Caltech101
- Flowers102
- MNIST
- SVHN
- IIIT5K
- Hateful Memes
- SST-2
- UCF101
- Kinetics700
- Country211
- CLEVR Counting
- KITTI Distance
- STL-10
- RareAct
- Flickr30
- MSCOCO
- ImageNet
- ImageNet-A
- ImageNet-R
- ImageNet Sketch
- ObjectNet (ImageNet Overlap)
- Youtube-BB
- ImageNet-Vid
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance.
### Bias and Fairness
We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
|
neurips/llama-guard-finetuned-1ep-500
|
neurips
| 2024-04-10T17:08:50Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/LlamaGuard-7b",
"base_model:adapter:meta-llama/LlamaGuard-7b",
"license:llama2",
"region:us"
] | null | 2024-04-10T16:38:53Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/LlamaGuard-7b
model-index:
- name: llama-guard-finetuned-1ep-500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-guard-finetuned-1ep-500
This model is a fine-tuned version of [meta-llama/LlamaGuard-7b](https://huggingface.co/meta-llama/LlamaGuard-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
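Because this repository is a PEFT adapter on top of meta-llama/LlamaGuard-7b, loading it likely follows the usual PEFT pattern (a minimal sketch; access to the gated base model is assumed, and the LlamaGuard prompt format is not reproduced here).
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Minimal sketch: standard adapter loading; generation settings are assumptions.
base_id = "meta-llama/LlamaGuard-7b"
adapter_id = "neurips/llama-guard-finetuned-1ep-500"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
inputs = tokenizer("Example conversation to classify.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```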
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 0.6
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.3.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
somosnlp/chaterapia_model
|
somosnlp
| 2024-04-10T17:07:13Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:google/gemma-2b-it",
"base_model:adapter:google/gemma-2b-it",
"region:us"
] | null | 2024-04-08T12:48:02Z |
---
library_name: peft
base_model: google/gemma-2b-it
---
### Model Description
Intended for building therapeutic-assistance chatbots, so that conversations can be held in situations of need (a hedged loading sketch is included below).
- **Developed by:** Julio Fullaondo Canga
- **Language(s) (NLP):** Spanish
- **Finetuned from model [optional]:** gemma-2b-it
- PEFT 0.10.0
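A minimal loading sketch, assuming the adapter applies directly on top of `google/gemma-2b-it` as declared above; the prompt is only an example, and access to the gated Gemma base model is required.

```python
# Hedged sketch: load the base model, attach this PEFT adapter, and generate a reply.
# The prompt below is an illustrative assumption, not a prescribed format.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b-it"  # gated model: requires accepting its license
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "somosnlp/chaterapia_model")

inputs = tokenizer("Hola, ¿cómo puedo manejar la ansiedad?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```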
|
dianamihalache27/TaskA_bertbase_uncased
|
dianamihalache27
| 2024-04-10T17:04:12Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:00:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: TaskA_bertbase_uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TaskA_bertbase_uncased
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-imdb-14m-mz-ada-v3-bs-4
|
AlignmentResearch
| 2024-04-10T17:02:38Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T17:02:25Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-imdb-14m-mz-ada-v3-bs-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-14m-mz-ada-v3-bs-4
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Juliofc/chaterapia_llama_model
|
Juliofc
| 2024-04-10T17:01:59Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-10T16:58:38Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
IndraneelKumar/results
|
IndraneelKumar
| 2024-04-10T17:00:01Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:legal_summarization",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-04-10T16:59:51Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
datasets:
- legal_summarization
base_model: microsoft/phi-2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the legal_summarization dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ShaneOD2611/detr-resnet-101_finetuned_CSGO
|
ShaneOD2611
| 2024-04-10T16:57:25Z | 188 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:ShaneOD2611/detr-resnet-101_finetuned_CSGO",
"base_model:finetune:ShaneOD2611/detr-resnet-101_finetuned_CSGO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-04-05T08:23:00Z |
---
license: apache-2.0
base_model: ShaneOD2611/detr-resnet-101_finetuned_CSGO
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-101_finetuned_CSGO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-101_finetuned_CSGO
This model is a fine-tuned version of [ShaneOD2611/detr-resnet-101_finetuned_CSGO](https://huggingface.co/ShaneOD2611/detr-resnet-101_finetuned_CSGO) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tomaszki/stablelm-21-b
|
tomaszki
| 2024-04-10T16:57:02Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-10T16:54:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VH1213141516/LAT_4-10sweep1_epsilon_1.5_time_limit_30000_N_checkpoints_50
|
VH1213141516
| 2024-04-10T16:53:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-10T16:53:37Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
VH1213141516/LAT_4-10sweep1_epsilon_5.0_time_limit_30000_N_checkpoints_50
|
VH1213141516
| 2024-04-10T16:53:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-10T16:53:27Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
VH1213141516/LAT_4-10sweep1_epsilon_8.5_time_limit_30000_N_checkpoints_50
|
VH1213141516
| 2024-04-10T16:53:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-10T16:53:24Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
VH1213141516/LAT_4-10sweep1_epsilon_1.0_time_limit_30000_N_checkpoints_50
|
VH1213141516
| 2024-04-10T16:53:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-10T16:53:15Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
Kijai/MagicTime-merged-fp16
|
Kijai
| 2024-04-10T16:51:08Z | 0 | 10 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-04-10T16:34:42Z |
---
license: apache-2.0
---
The files include the MagicTime temporal LoRA merged into the AnimateDiff v3 motion model, as well as the MagicTime spatial LoRA converted to .safetensors format. Used together, they work in ComfyUI with AnimateDiff-Evolved.
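As an illustration of the .safetensors conversion mentioned above, a hedged sketch follows; the file names are assumptions, and the actual merge of the temporal LoRA into the motion model involved weight arithmetic not shown here.

```python
# Hedged sketch: re-save a LoRA checkpoint's tensors in .safetensors format.
# File names are assumptions; this is not the exact script used for these files.
import torch
from safetensors.torch import save_file

ckpt = torch.load("magictime_spatial_lora.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"
tensors = {k: v.contiguous() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}
save_file(tensors, "magictime_spatial_lora.safetensors")
```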
Original sources:
https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt
https://huggingface.co/BestWishYsh/MagicTime
|
denise227/amazon_kindle_sentiment_analysis_kaggle
|
denise227
| 2024-04-10T16:51:01Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T10:56:02Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: amazon_kindle_sentiment_analysis_kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_kindle_sentiment_analysis_kaggle
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5161
- eval_accuracy: 0.52
- eval_runtime: 29.8399
- eval_samples_per_second: 40.215
- eval_steps_per_second: 5.027
- epoch: 0.22
- step: 260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
CitizenKayne/distilbert-base-uncased-finetuned-emotion
|
CitizenKayne
| 2024-04-10T16:49:24Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T14:30:17Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.920547965574161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2549
- Accuracy: 0.9205
- F1: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9026 | 1.0 | 250 | 0.3772 | 0.895 | 0.8937 |
| 0.3124 | 2.0 | 500 | 0.2549 | 0.9205 | 0.9205 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_64_64_0.01_4_0.0002
|
ferrazzipietro
| 2024-04-10T16:49:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:48:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
m-a-p/MuPT-v1-8192-190M
|
m-a-p
| 2024-04-10T16:45:57Z | 596 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"music",
"art",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:54:39Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- music
- art
---
<div align="center">
<img src="Yi_logo.svg" width="150px" style="display: inline-block;">
<img src="m-a-p.png" width="150px" style="display: inline-block;">
</div>
## MuPT: Symbolic Music Generative Pre-trained Transformer
MuPT is a series of pre-trained models for symbolic music generation. The models were trained on a large-scale dataset of symbolic music, including millions of monophonic and polyphonic pieces from different genres and styles. They use the LLaMA 2 architecture and can be further applied to downstream music generation tasks such as melody generation, accompaniment generation, and multi-track music generation.
- 09/01/2024: a series of pre-trained MuPT models are released, with parameters ranging from 110M to 1.3B.
## Model architecture
The details of model architecture of MuPT-v1 are listed below:
| Name | Parameters | Training Data(Music Pieces) | Seq Length | Hidden Size | Layers | Heads |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MuPT-v1-8192-110M | 110M | 7M x 8 epochs | 8192 | 768 | 12 | 12 |
| MuPT-v1-8192-345M | 345M | 7M x 6 epochs | 8192 | 1024 | 24 | 16 |
| MuPT-v1-8192-770M | 770M | 7M x 5 epochs | 8192 | 1280 | 36 | 20 |
| MuPT-v1-8192-1.3B | 1.3B | 7M x 8 epochs | 8192 | 1536 | 48 | 24 |
## Model Usage
#### Huggingface
##### Inference
```python
from transformers import AutoModelForCausalLM, AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("m-a-p/MuPT_v1_8192_110M",
trust_remote_code=True,
use_fast=False)
model = AutoModelForCausalLM.from_pretrained("m-a-p/MuPT_v1_8192_110M").eval().half().cuda()
prefix = "X:1<n>L:1/8<n>Q:1/8=200<n>M:4/4<n>K:Gmin<n>|:\"Gm\" BGdB" # replace "\n" with "<n>" for all the MuPT-8192 models, but not for MuPT-4096 models
inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
max_length = 256
outputs = model.generate(
inputs.input_ids,
max_length=max_length
)
outputs = tokenizer.decode(outputs[0])
print(outputs)
```
##### Post-processing
Since we merged multiple tracks into one track during training, we need to separate the outputs into standard ABC notation sequences. The post-processing code is as follows:
```python
import re
SEPARATORS = ['|', '|]', '||', '[|', '|:', ':|', '::']
SEP_DICT = {}
for i, sep in enumerate(SEPARATORS, start=1):
# E.g. ' | ': ' <1>'
SEP_DICT[' '+sep+' '] = f' <{i}>'
NEWSEP = '<|>'
def sep2tok(row):
for sep, tok in SEP_DICT.items():
row = row.replace(sep, tok+'<=> ')
return row
def tok2sep(bar):
for sep, tok in SEP_DICT.items():
bar = bar.replace(tok, sep)
return bar
def spacing(row):
for sep in SEPARATORS:
def subfunc(match):
symbol = [':', '|', ']']
if match.group(1) is None:
return f' {sep}'
elif match.group(1) in symbol:
return f' {sep}{match.group(1)}'
else:
return ' '+sep+' '+match.group(1)
pattern = r' ' + re.escape(sep) + r'(.{1})'
row = re.sub(pattern, subfunc, row)
row = row.replace('\n'+sep+'"', '\n '+sep+' "') # B \n|"A -> B \n | "A
row = row.replace(' '+sep+'\n', ' '+sep+' \n') # B |\n -> B | \n
return row
def decode(piece):
dec_piece = ''
idx = piece.find(' '+NEWSEP+' ')
heads = piece[:idx]
scores = piece[idx:]
scores_lst = re.split(' <\|>', scores)
all_bar_lst = []
for bar in scores_lst:
if bar == '':
continue
bar = sep2tok(bar)
bar_lst = re.split('<=>', bar)
bar_lst = list(map(tok2sep, bar_lst))
if len(all_bar_lst) == 0:
all_bar_lst = [[] for _ in range(len(bar_lst))]
for i in range(len(bar_lst)):
all_bar_lst[i].append(bar_lst[i])
if len(all_bar_lst) > 1:
# There might be the bar number like %30 at the end
# which need to be specially handled.
if len(all_bar_lst[0]) > len(all_bar_lst[1]):
last_bar_lst = all_bar_lst[0][-1].split()
all_bar_lst[0].pop()
for i in range(len(all_bar_lst)):
all_bar_lst[i].append(last_bar_lst[i])
# Add the remaining symbols to the last row.
if i == len(all_bar_lst) - 1:
for j in range(i+1, len(last_bar_lst)):
all_bar_lst[i][-1] += ' ' + last_bar_lst[j]
# Ensure the lengths are consistent.
length = len(all_bar_lst[0])
for lst in all_bar_lst[1:]:
# assert len(lst) == length
pass
dec_piece += heads
for i in range(len(all_bar_lst)):
if len(all_bar_lst) > 1:
dec_piece += f'V:{i+1}\n'
dec_piece += ''.join(all_bar_lst[i])
dec_piece += '\n'
# Remove redundant spaces.
dec_piece = re.sub(' {2,}', ' ', dec_piece)
return dec_piece
```
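A minimal usage sketch, assuming `outputs` is the decoded string produced by the inference example above; whether any additional cleanup of special tokens is needed depends on the tokenizer.

```python
# Split the merged multi-track output back into standard ABC notation voices.
abc_score = decode(outputs)
print(abc_score)
```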
Processed Output:
```shell
X:1
L:1/8
Q:1/8=200
M:4/4<n>K:Gmin
|:\"Gm\" BGdB fdBG |\"F\" AFcF dFcF |\"Gm\" BGdG gFBF |\"F\" AFAG AF F2 |\"Gm\" BGBd fffd |\"F\" cdcB cdeg |
\"Gm\" fdcB\"Eb\" AFcA |1 BGFG\"F\" AFGc :|2 BGFG\"F\" AF F2 ||
```
Once you encode the post-processed ABC notation into audio, you will hear the following music.
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/640701cb4dc5f2846c91d4eb/gnBULaFjcUyXYzzIwXLZq.mpga"></audio>
#### Megatron-LM
We now provide usage instructions based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main).
Before starting, make sure you have setup the relevant environment and codebase.
```shell
# pull Megatron-LM codebase
mkdir -p /path/to/workspace && cd /path/to/workspace
git clone https://github.com/NVIDIA/Megatron-LM.git
# download the pre-trained MuPT models checkpoint and vocab files from Huggingface page
mkdir -p /models/MuPT_v0_8192_1.3B && cd /models/MuPT_v0_8192_1.3B
wget -O model_optim_rng.pt https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/model_optim_rng.pt?download=true
wget -O newline.vocab https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/newline.vocab?download=true
wget -O newline.txt https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/newline.txt?download=true
```
We recommend using the latest version of [NGC's PyTorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for MuPT inference. See more details in [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main)
```shell
# pull the latest NGC's PyTorch container, mount the workspace directory and enter the container
docker run --gpus all -it --name megatron --shm-size=16g -v $PWD:/workspace -p 5000:5000 nvcr.io/nvidia/pytorch:23.11-py3 /bin/bash
```
Once you enter the container, you can start a REST server for inference.
<details>
<summary>Click to expand the example script</summary>
```shell
#!/bin/bash
# This example will start serving the 1.3B model.
export CUDA_DEVICE_MAX_CONNECTIONS=1
DISTRIBUTED_ARGS="--nproc_per_node 1 \
--nnodes 1 \
--node_rank 0 \
--master_addr localhost \
--master_port 6000"
CHECKPOINT=/path/to/model/checkpoint/folder
VOCAB_FILE=/path/to/vocab/file
MERGE_FILE=/path/to/merge/file
MODEL_SIZE="1.3B"
if [[ ${MODEL_SIZE} == "110M" ]]; then HIDDEN_SIZE=768; NUM_HEAD=12; NUM_QUERY_GROUP=12; NUM_LAYERS=12; FFN_HIDDEN_SIZE=3072; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "345M" ]]; then HIDDEN_SIZE=1024; NUM_HEAD=16; NUM_QUERY_GROUP=16; NUM_LAYERS=24; FFN_HIDDEN_SIZE=4096; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "770M" ]]; then HIDDEN_SIZE=1280; NUM_HEAD=20; NUM_QUERY_GROUP=20; NUM_LAYERS=36; FFN_HIDDEN_SIZE=5120; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "1.3B" ]]; then HIDDEN_SIZE=1536; NUM_HEAD=24; NUM_QUERY_GROUP=24; NUM_LAYERS=48; FFN_HIDDEN_SIZE=6144; NORM_EPS=1e-5;
else echo "invalid MODEL_SIZE: ${MODEL_SIZE}"; exit 1
fi
MAX_SEQ_LEN=8192
MAX_POSITION_EMBEDDINGS=8192
pip install flask-restful
torchrun $DISTRIBUTED_ARGS tools/run_text_generation_server.py \
--tensor-model-parallel-size 1 \
--pipeline-model-parallel-size 1 \
--num-layers ${NUM_LAYERS} \
--hidden-size ${HIDDEN_SIZE} \
--ffn-hidden-size ${FFN_HIDDEN_SIZE} \
--load ${CHECKPOINT} \
--group-query-attention \
--num-query-groups ${NUM_QUERY_GROUP} \
--position-embedding-type rope \
--num-attention-heads ${NUM_HEAD} \
--max-position-embeddings ${MAX_POSITION_EMBEDDINGS} \
--tokenizer-type GPT2BPETokenizer \
--normalization RMSNorm \
--norm-epsilon ${NORM_EPS} \
--make-vocab-size-divisible-by 1 \
--swiglu \
--use-flash-attn \
--bf16 \
--micro-batch-size 1 \
--disable-bias-linear \
--no-bias-gelu-fusion \
--untie-embeddings-and-output-weights \
--seq-length ${MAX_SEQ_LEN} \
--vocab-file $VOCAB_FILE \
--merge-file $MERGE_FILE \
--attention-dropout 0.0 \
--hidden-dropout 0.0 \
--weight-decay 1e-1 \
--clip-grad 1.0 \
--adam-beta1 0.9 \
--adam-beta2 0.95 \
--adam-eps 1e-8 \
--seed 42
```
</details>
Use curl to query the server directly. Note that the newline token `\n` is represented by `<n>` in the vocabulary, so we need to replace the newline token with `<n>` in both the prompt and the generated tokens.
```shell
curl 'http://localhost:6000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["X:1<n>L:1/8<n>Q:1/8=200<n>M:4/4<n>K:Gmin<n>|:\"Gm\" BGdB"], "tokens_to_generate":4096}'
```
Processed Output:
```shell
X:1
L:1/8
Q:1/8=200
M:4/4<n>K:Gmin
|:\"Gm\" BGdB fdBG |\"F\" AFcF dFcF |\"Gm\" BGdG gFBF |\"F\" AFAG AF F2 |\"Gm\" BGBd fffd |\"F\" cdcB cdeg |
\"Gm\" fdcB\"Eb\" AFcA |1 BGFG\"F\" AFGc :|2 BGFG\"F\" AF F2 ||
```
|
m-a-p/MuPT-v1-8192-1.07B
|
m-a-p
| 2024-04-10T16:45:45Z | 174 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"music",
"art",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:53:55Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- music
- art
---
<div align="center">
<img src="Yi_logo.svg" width="150px" style="display: inline-block;">
<img src="m-a-p.png" width="150px" style="display: inline-block;">
</div>
## MuPT: Symbolic Music Generative Pre-trained Transformer
MuPT is a series of pre-trained models for symbolic music generation. The models were trained on a large-scale dataset of symbolic music containing millions of monophonic and polyphonic pieces across different genres and styles. They follow the LLaMA 2 architecture and can be further used for downstream music generation tasks such as melody generation, accompaniment generation, and multi-track music generation.
- 09/01/2024: A series of pre-trained MuPT models is released, with parameter counts ranging from 110M to 1.3B.
## Model architecture
The details of model architecture of MuPT-v1 are listed below:
| Name | Parameters | Training Data (Music Pieces) | Seq Length | Hidden Size | Layers | Heads |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| MuPT-v1-8192-110M | 110M | 7M x 8 epochs | 8192 | 768 | 12 | 12 |
| MuPT-v1-8192-345M | 345M | 7M x 6 epochs | 8192 | 1024 | 24 | 16 |
| MuPT-v1-8192-770M | 770M | 7M x 5 epochs | 8192 | 1280 | 36 | 20 |
| MuPT-v1-8192-1.3B | 1.3B | 7M x 8 epochs | 8192 | 1536 | 48 | 24 |
## Model Usage
#### Huggingface
##### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("m-a-p/MuPT_v1_8192_770M",
trust_remote_code=True,
use_fast=False)
model = AutoModelForCausalLM.from_pretrained("m-a-p/MuPT_v1_8192_770M").eval().half().cuda()
prefix = "X:1<n>L:1/8<n>Q:1/8=200<n>M:4/4<n>K:Gmin<n>|:\"Gm\" BGdB" # replace "\n" with "<n>" for all the MuPT-8192 models, but not for MuPT-4096 models
inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
max_length = 256
outputs = model.generate(
inputs.input_ids,
max_length=max_length
)
outputs = tokenizer.decode(outputs[0])
print(outputs)
```
##### Post-processing
Since multiple tracks are merged into a single sequence during training, the outputs need to be separated back into standard ABC notation sequences. The post-processing code is as follows:
```python
import re
SEPARATORS = ['|', '|]', '||', '[|', '|:', ':|', '::']
SEP_DICT = {}
for i, sep in enumerate(SEPARATORS, start=1):
# E.g. ' | ': ' <1>'
SEP_DICT[' '+sep+' '] = f' <{i}>'
NEWSEP = '<|>'
def sep2tok(row):
for sep, tok in SEP_DICT.items():
row = row.replace(sep, tok+'<=> ')
return row
def tok2sep(bar):
for sep, tok in SEP_DICT.items():
bar = bar.replace(tok, sep)
return bar
def spacing(row):
for sep in SEPARATORS:
def subfunc(match):
symbol = [':', '|', ']']
if match.group(1) is None:
return f' {sep}'
elif match.group(1) in symbol:
return f' {sep}{match.group(1)}'
else:
return ' '+sep+' '+match.group(1)
pattern = r' ' + re.escape(sep) + r'(.{1})'
row = re.sub(pattern, subfunc, row)
row = row.replace('\n'+sep+'"', '\n '+sep+' "') # B \n|"A -> B \n | "A
row = row.replace(' '+sep+'\n', ' '+sep+' \n') # B |\n -> B | \n
return row
def decode(piece):
dec_piece = ''
idx = piece.find(' '+NEWSEP+' ')
heads = piece[:idx]
scores = piece[idx:]
    scores_lst = re.split(r' <\|>', scores)
all_bar_lst = []
for bar in scores_lst:
if bar == '':
continue
bar = sep2tok(bar)
bar_lst = re.split('<=>', bar)
bar_lst = list(map(tok2sep, bar_lst))
if len(all_bar_lst) == 0:
all_bar_lst = [[] for _ in range(len(bar_lst))]
for i in range(len(bar_lst)):
all_bar_lst[i].append(bar_lst[i])
if len(all_bar_lst) > 1:
# There might be the bar number like %30 at the end
# which need to be specially handled.
if len(all_bar_lst[0]) > len(all_bar_lst[1]):
last_bar_lst = all_bar_lst[0][-1].split()
all_bar_lst[0].pop()
for i in range(len(all_bar_lst)):
all_bar_lst[i].append(last_bar_lst[i])
# Add the remaining symbols to the last row.
if i == len(all_bar_lst) - 1:
for j in range(i+1, len(last_bar_lst)):
all_bar_lst[i][-1] += ' ' + last_bar_lst[j]
# Ensure the lengths are consistent.
length = len(all_bar_lst[0])
for lst in all_bar_lst[1:]:
# assert len(lst) == length
pass
dec_piece += heads
for i in range(len(all_bar_lst)):
if len(all_bar_lst) > 1:
dec_piece += f'V:{i+1}\n'
dec_piece += ''.join(all_bar_lst[i])
dec_piece += '\n'
# Remove redundant spaces.
dec_piece = re.sub(' {2,}', ' ', dec_piece)
return dec_piece
```
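As a rough illustration (not part of the original card), the helper above can be applied to the string produced by the Hugging Face inference snippet; the variable name `outputs` follows that example, and the end-of-sequence cleanup is an assumption:
```python
# Hypothetical usage of decode() on the generated text from the inference example above.
# `outputs` is the string returned by tokenizer.decode(); the EOS cleanup is an assumption.
raw = outputs.replace('</s>', '').strip()
abc = decode(raw)                 # split the merged tracks back into per-voice ABC
abc = abc.replace('<n>', '\n')    # 8192-series models encode newlines as <n>
print(abc)
```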
Processed Output:
```shell
X:1
L:1/8
Q:1/8=200
M:4/4<n>K:Gmin
|:\"Gm\" BGdB fdBG |\"F\" AFcF dFcF |\"Gm\" BGdG gFBF |\"F\" AFAG AF F2 |\"Gm\" BGBd fffd |\"F\" cdcB cdeg |
\"Gm\" fdcB\"Eb\" AFcA |1 BGFG\"F\" AFGc :|2 BGFG\"F\" AF F2 ||
```
Once you render the post-processed ABC notation as audio, you will hear the following music.
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/640701cb4dc5f2846c91d4eb/gnBULaFjcUyXYzzIwXLZq.mpga"></audio>
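The card does not include a rendering script; as a sketch, one possible route is to parse the ABC text with the `music21` package and write a MIDI file, which can then be synthesized to audio with any soundfont-based synthesizer (the tune below is a shortened, hypothetical excerpt of the output above):
```python
# Minimal sketch (not from the original card): render ABC notation to a MIDI file.
# Assumes the `music21` package is installed; the ABC excerpt is illustrative only.
from music21 import converter

abc_tune = """X:1
L:1/8
Q:1/8=200
M:4/4
K:Gmin
|:"Gm" BGdB fdBG |"F" AFcF dFcF |"Gm" BGdG gFBF |"F" AFAG AF F2 |]
"""

score = converter.parse(abc_tune, format='abc')   # parse the ABC text into a music21 score
score.write('midi', fp='mupt_sample.mid')         # write a MIDI file for playback/synthesis
```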
#### Megatron-LM
We also provide usage instructions based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main).
Before starting, make sure you have set up the relevant environment and codebase.
```shell
# pull Megatron-LM codebase
mkdir -p /path/to/workspace && cd /path/to/workspace
git clone https://github.com/NVIDIA/Megatron-LM.git
# download the pre-trained MuPT model checkpoint and vocab files from the Hugging Face page
mkdir -p /models/MuPT_v0_8192_1.3B && cd /models/MuPT_v0_8192_1.3B
wget -O model_optim_rng.pt https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/model_optim_rng.pt?download=true
wget -O newline.vocab https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/newline.vocab?download=true
wget -O newline.txt https://huggingface.co/m-a-p/MuPT_v0_8192_1.3B/resolve/main/newline.txt?download=true
```
We recommend using the latest version of [NGC's PyTorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for MuPT inference. See [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/main) for more details.
```shell
# pull the latest NGC's PyTorch container, mount the workspace directory and enter the container
docker run --gpus all -it --name megatron --shm-size=16g -v $PWD:/workspace -p 5000:5000 nvcr.io/nvidia/pytorch:23.11-py3 /bin/bash
```
Once you enter the container, you can start a REST server for inference.
<details>
<summary>Click to expand the example script</summary>

```bash
#!/bin/bash
# This example will start serving the 1.3B model.
export CUDA_DEVICE_MAX_CONNECTIONS=1
DISTRIBUTED_ARGS="--nproc_per_node 1 \
--nnodes 1 \
--node_rank 0 \
--master_addr localhost \
--master_port 6000"
CHECKPOINT=/path/to/model/checkpoint/folder
VOCAB_FILE=/path/to/vocab/file
MERGE_FILE=/path/to/merge/file
MODEL_SIZE="1.3B"
if [[ ${MODEL_SIZE} == "110M" ]]; then HIDDEN_SIZE=768; NUM_HEAD=12; NUM_QUERY_GROUP=12; NUM_LAYERS=12; FFN_HIDDEN_SIZE=3072; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "345M" ]]; then HIDDEN_SIZE=1024; NUM_HEAD=16; NUM_QUERY_GROUP=16; NUM_LAYERS=24; FFN_HIDDEN_SIZE=4096; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "770M" ]]; then HIDDEN_SIZE=1280; NUM_HEAD=20; NUM_QUERY_GROUP=20; NUM_LAYERS=36; FFN_HIDDEN_SIZE=5120; NORM_EPS=1e-5;
elif [[ ${MODEL_SIZE} == "1.3B" ]]; then HIDDEN_SIZE=1536; NUM_HEAD=24; NUM_QUERY_GROUP=24; NUM_LAYERS=48; FFN_HIDDEN_SIZE=6144; NORM_EPS=1e-5;
else echo "invalid MODEL_SIZE: ${MODEL_SIZE}"; exit 1
fi
MAX_SEQ_LEN=8192
MAX_POSITION_EMBEDDINGS=8192
pip install flask-restful
torchrun $DISTRIBUTED_ARGS tools/run_text_generation_server.py \
--tensor-model-parallel-size 1 \
--pipeline-model-parallel-size 1 \
--num-layers ${NUM_LAYERS} \
--hidden-size ${HIDDEN_SIZE} \
--ffn-hidden-size ${FFN_HIDDEN_SIZE} \
--load ${CHECKPOINT} \
--group-query-attention \
--num-query-groups ${NUM_QUERY_GROUP} \
--position-embedding-type rope \
--num-attention-heads ${NUM_HEAD} \
--max-position-embeddings ${MAX_POSITION_EMBEDDINGS} \
--tokenizer-type GPT2BPETokenizer \
--normalization RMSNorm \
--norm-epsilon ${NORM_EPS} \
--make-vocab-size-divisible-by 1 \
--swiglu \
--use-flash-attn \
--bf16 \
--micro-batch-size 1 \
--disable-bias-linear \
--no-bias-gelu-fusion \
--untie-embeddings-and-output-weights \
--seq-length ${MAX_SEQ_LEN} \
--vocab-file $VOCAB_FILE \
--merge-file $MERGE_FILE \
--attention-dropout 0.0 \
--hidden-dropout 0.0 \
--weight-decay 1e-1 \
--clip-grad 1.0 \
--adam-beta1 0.9 \
--adam-beta2 0.95 \
--adam-eps 1e-8 \
--seed 42
```
</details>
Use curl to query the server directly. Note that the newline character `\n` is represented by `<n>` in the vocabulary, so newlines must be replaced with `<n>` in both the prompt and the generated tokens.
```shell
curl 'http://localhost:6000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["X:1<n>L:1/8<n>Q:1/8=200<n>M:4/4<n>K:Gmin<n>|:\"Gm\" BGdB"], "tokens_to_generate":4096}'
```
Processed Output:
```shell
X:1
L:1/8
Q:1/8=200
M:4/4<n>K:Gmin
|:\"Gm\" BGdB fdBG |\"F\" AFcF dFcF |\"Gm\" BGdG gFBF |\"F\" AFAG AF F2 |\"Gm\" BGBd fffd |\"F\" cdcB cdeg |
\"Gm\" fdcB\"Eb\" AFcA |1 BGFG\"F\" AFGc :|2 BGFG\"F\" AF F2 ||
```
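Equivalently, the same request can be issued from Python. This is a minimal sketch (not from the original card); the name of the response field is an assumption and should be checked against the actual server reply:
```python
import requests

# Hypothetical Python equivalent of the curl call above (same endpoint and payload).
url = "http://localhost:6000/api"
payload = {
    "prompts": ['X:1<n>L:1/8<n>Q:1/8=200<n>M:4/4<n>K:Gmin<n>|:"Gm" BGdB'],
    "tokens_to_generate": 4096,
}
response = requests.put(url, json=payload)
print(response.json()["text"][0])  # "text" field assumed; inspect response.json() to confirm
```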
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_64_64_0.01_2_0.0002
|
ferrazzipietro
| 2024-04-10T16:44:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:43:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mohitcr1/mistral_instruct_generation
|
Mohitcr1
| 2024-04-10T16:42:17Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-02-20T18:07:20Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1508 | 0.67 | 20 | 1.0891 |
| 0.9774 | 1.33 | 40 | 1.0043 |
| 0.9105 | 2.0 | 60 | 0.9710 |
| 0.8641 | 2.67 | 80 | 0.9539 |
| 0.8492 | 3.33 | 100 | 0.9395 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
|
PasinduProjects/criminal-case-RoBERTa3
|
PasinduProjects
| 2024-04-10T16:34:20Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T06:43:38Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: criminal-case-RoBERTa3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# criminal-case-RoBERTa3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5650
- Accuracy: 0.7488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6055 | 0.02 | 10 | 0.6017 | 0.7488 |
| 0.6904 | 0.04 | 20 | 0.5712 | 0.7488 |
| 0.608 | 0.06 | 30 | 0.5650 | 0.7488 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_64_32_0.01_4_0.0002
|
ferrazzipietro
| 2024-04-10T16:34:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:33:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JBERN29/code-search-net-tokenizer
|
JBERN29
| 2024-04-10T16:31:43Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:31:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DuongTrongChi/facebook-commet-classification-base
|
DuongTrongChi
| 2024-04-10T16:28:49Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:uitnlp/visobert",
"base_model:finetune:uitnlp/visobert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T16:28:30Z |
---
base_model: uitnlp/visobert
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: facebook-commet-classification-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook-commet-classification-base
This model is a fine-tuned version of [uitnlp/visobert](https://huggingface.co/uitnlp/visobert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0642
- Accuracy: 0.9830
- F1: 0.9568
- Precision: 0.9441
- Recall: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0616 | 1.0 | 2376 | 0.0642 | 0.9830 | 0.9568 | 0.9441 | 0.9698 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
sherelyn912/emotional-distilbert-3
|
sherelyn912
| 2024-04-10T16:27:16Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T14:26:43Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: emotional-distilbert-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotional-distilbert-3
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1515
- Accuracy: 0.4428
- F1: 0.4258
- Precision: 0.4408
- Recall: 0.4428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 2.7857 | 1.0 | 270 | 2.7060 | 0.2505 | 0.2301 | 0.3881 | 0.2505 |
| 1.4183 | 2.0 | 540 | 2.1665 | 0.3693 | 0.3674 | 0.4295 | 0.3693 |
| 0.621 | 3.0 | 810 | 1.8691 | 0.4419 | 0.4343 | 0.4545 | 0.4419 |
| 0.2352 | 4.0 | 1080 | 1.8406 | 0.4401 | 0.4333 | 0.4571 | 0.4401 |
| 0.0816 | 5.0 | 1350 | 1.9892 | 0.4474 | 0.4335 | 0.4518 | 0.4474 |
| 0.0284 | 6.0 | 1620 | 2.1080 | 0.4347 | 0.4224 | 0.4365 | 0.4347 |
| 0.0141 | 7.0 | 1890 | 2.1515 | 0.4428 | 0.4258 | 0.4408 | 0.4428 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
astro21/pix2struct-base-Sci
|
astro21
| 2024-04-10T16:27:03Z | 50 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-04-10T16:16:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_32_64_0.01_8_0.0002
|
ferrazzipietro
| 2024-04-10T16:24:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:23:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rahul13/llama-2-7b-qna-tuned
|
Rahul13
| 2024-04-10T16:21:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2024-04-08T13:58:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Aviral2412/wav2vec2-base-timit-demo-google-colab
|
Aviral2412
| 2024-04-10T16:19:58Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:19:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_32_64_0.01_4_0.0002
|
ferrazzipietro
| 2024-04-10T16:19:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:19:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luisvarona/Deteccion_sonido
|
luisvarona
| 2024-04-10T16:12:39Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-04-10T16:12:35Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Ryu-m0m/q-FrozenLake-v1-4x4-noSlippery
|
Ryu-m0m
| 2024-04-10T16:12:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-04-10T15:54:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # use `import gym` for older Gym versions
# load_from_hub is the helper defined in the Deep RL course notebooks (downloads and unpickles the Q-table)
model = load_from_hub(repo_id="Ryu-m0m/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
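Note that `load_from_hub` is not defined in the snippet above. A minimal sketch of such a helper, assuming the Q-table is stored as a pickled dict on the Hub and fetched with `huggingface_hub` (a hypothetical helper, not part of this repository), could look like:

```python
import pickle

import gymnasium as gym  # or `import gym`, depending on your setup
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Ryu-m0m/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # match the training-time env settings
```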
|
ferrazzipietro/Qwen1.5-7B-Chat__adapters_en.layer1_NoQuant_torch.bfloat16_32_32_0.01_8_0.0002
|
ferrazzipietro
| 2024-04-10T16:10:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T16:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imadbekkouch/medieval_music_yolov8
|
imadbekkouch
| 2024-04-10T16:06:59Z | 0 | 0 | null |
[
"onnx",
"image-segmentation",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-04-10T16:05:07Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- onnx
---
|
AlignmentResearch/robust_llm_pythia-imdb-160m-mz-ada-v3-bs-16
|
AlignmentResearch
| 2024-04-10T16:06:43Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-10T16:06:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-160m
model-index:
- name: robust_llm_pythia-imdb-160m-mz-ada-v3-bs-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-160m-mz-ada-v3-bs-16
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
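For illustration only, the hyperparameters above might map to 🤗 Transformers `TrainingArguments` roughly as follows; the output directory is an assumption and the original training script is not shown here:

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; names/paths are assumptions.
training_args = TrainingArguments(
    output_dir="robust_llm_pythia-imdb-160m-mz-ada-v3-bs-16",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```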
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Andresmfs/aguila-es-inclusivo-adapters_v3
|
Andresmfs
| 2024-04-10T16:02:06Z | 0 | 0 | null |
[
"tensorboard",
"translation",
"generated_from_trainer",
"region:us"
] |
translation
| 2024-04-10T11:10:52Z |
---
tags:
- translation
- generated_from_trainer
model-index:
- name: aguila-es-inclusivo-adapters_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es-inclusivo-translator
This model is a fine-tuned version of [projecte-aina/aguila-7b](https://huggingface.co/projecte-aina/aguila-7b) on the dataset [somosnlp/es-inclusive-language](https://huggingface.co/datasets/somosnlp/es-inclusive-language).
Languages are powerful tools to communicate ideas, but their use is not impartial. The selection of words carries inherent biases and reflects subjective perspectives. In some cases, language is wielded to enforce ideologies, marginalize certain groups, or promote specific political agendas.
Spanish is no exception. For instance, when we say “los alumnos” or “los ingenieros”, we are excluding women from those groups. Similarly, expressions such as “los gitanos” or “los musulmanes” perpetuate discrimination against these communities.
In response to these linguistic challenges, this model offers a way to construct inclusive alternatives in accordance with official guidelines on inclusive language from various Spanish-speaking countries. Its purpose is to provide grammatically correct and inclusive solutions in situations where our language choices might otherwise be exclusive.
This tool contributes to the fifth Sustainable Development Goal: achieve gender equality and empower all women and girls.
Given an input text, the model returns the original text rewritten in inclusive language.
It achieves the following results on the evaluation set:
- Loss: 0.6030
## Model description
- **Developed by**: [Andrés Martínez Fernández-Salguero](https://huggingface.co/Andresmfs) (andresmfs), Imanuel Rozenberg (manu_20392), Gaia Quintana Fleitas (gaiaq), Josué Sauca (josue_sauca), Miguel López (wizmik12)
- **Language(s)**: Spanish
- **Fine-tuned from the model**: [projecte-aina/aguila-7b](https://huggingface.co/projecte-aina/aguila-7b)
- **Licence**: cc-by-nc-sa-4.0
## Social Impact
An inclusive translator holds significant social impact by promoting equity and representation within texts. By rectifying biases ingrained in language and fostering inclusivity, it combats discrimination, amplifies the visibility of marginalized groups, and contributes to the cultivation of a more inclusive and respectful society.
This tool contributes to the fifth Sustainable Development Goal: achieve gender equality and empower all women and girls.
## Intended uses & limitations
More information needed
### How to use
Here is how to use this model:
~~~
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
import torch

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('somosnlp/', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('somosnlp/', trust_remote_code=True,
                                             device_map="auto")
# (for 4-bit loading with a BitsAndBytesConfig, see the example below)

# generation_config
generation_config = model.generation_config
generation_config.max_new_tokens = 100
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

# Define inference function
def translate_es_inclusivo(exclusive_text):
    # generate input prompt
    eval_prompt = f"""Reescribe el siguiente texto utilizando lenguaje inclusivo.\n
Texto: {exclusive_text}\n
Texto en lenguaje inclusivo:"""

    # tokenize input
    model_input = tokenizer(eval_prompt, return_tensors="pt").to(model.device)

    # extend max_new_tokens for longer inputs (cast to int, as generate expects an integer)
    if len(model_input['input_ids'][0]) > 80:
        model.generation_config.max_new_tokens = int(1.2 * len(model_input['input_ids'][0]))

    # get length of encoded prompt
    prompt_token_len = len(model_input['input_ids'][0])

    # generate and decode, keeping only the newly generated tokens
    with torch.no_grad():
        inclusive_text = tokenizer.decode(
            model.generate(**model_input, generation_config=generation_config)[0][prompt_token_len:],
            skip_special_tokens=True)

    return inclusive_text


##########

input_text = 'Los alumnos atienden a sus profesores'
print(translate_es_inclusivo(input_text))
~~~
As this is a large model, you may want to load it in 4-bit precision:
~~~
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
from transformers import BitsAndBytesConfig
import torch

## Load model in 4 bits
# bnb configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False)

# model
model = AutoModelForCausalLM.from_pretrained('somosnlp/', trust_remote_code=True,
                                             quantization_config=bnb_config,
                                             device_map="auto")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained('somosnlp/', trust_remote_code=True)

# generation_config
generation_config = model.generation_config
generation_config.max_new_tokens = 100
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

# Define inference function
def translate_es_inclusivo(exclusive_text):
    # generate input prompt
    eval_prompt = f"""Reescribe el siguiente texto utilizando lenguaje inclusivo.\n
Texto: {exclusive_text}\n
Texto en lenguaje inclusivo:"""

    # tokenize input
    model_input = tokenizer(eval_prompt, return_tensors="pt").to(model.device)

    # extend max_new_tokens for longer inputs (cast to int, as generate expects an integer)
    if len(model_input['input_ids'][0]) > 80:
        model.generation_config.max_new_tokens = int(1.2 * len(model_input['input_ids'][0]))

    # get length of encoded prompt
    prompt_token_len = len(model_input['input_ids'][0])

    # generate and decode, keeping only the newly generated tokens
    with torch.no_grad():
        inclusive_text = tokenizer.decode(
            model.generate(**model_input, generation_config=generation_config)[0][prompt_token_len:],
            skip_special_tokens=True)

    return inclusive_text


##########

input_text = 'Los alumnos atienden a sus profesores'
print(translate_es_inclusivo(input_text))
~~~
## Training and evaluation data
Training and evaluation data can be found in [somosnlp/es-inclusive-language](https://huggingface.co/datasets/somosnlp/es-inclusive-language)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 402 | 0.8020 |
| 1.0274 | 2.0 | 804 | 0.7019 |
| 0.6745 | 3.0 | 1206 | 0.6515 |
| 0.5826 | 4.0 | 1608 | 0.6236 |
| 0.5104 | 5.0 | 2010 | 0.6161 |
| 0.5104 | 6.0 | 2412 | 0.6149 |
| 0.4579 | 7.0 | 2814 | 0.6030 |
| 0.4255 | 8.0 | 3216 | 0.6151 |
| 0.3898 | 9.0 | 3618 | 0.6209 |
| 0.3771 | 10.0 | 4020 | 0.6292 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|