**Dataset schema** (one row per column: dtype and observed range):

| Column | Dtype | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 18:27:06 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 18:26:56 |
| card | string (length) | 11 | 1.01M |
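A dump like this is easiest to explore programmatically. Below is a minimal sketch using the `datasets` library; the repo id is a hypothetical placeholder, since the actual source of this dump is not stated here:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the real source of this metadata dump.
ds = load_dataset("someuser/hub-model-metadata", split="train")

print(ds.features)  # should match the schema table above
row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
```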
hudsiop/llama32-1b-wikitext2-distilled-v2
hudsiop
2025-06-02T14:01:58Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-02T12:56:30Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
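This record's tags (transformers, llama, text-generation) pin down the architecture even though the quick-start section above is still a placeholder. A minimal, generic sketch assuming the standard `transformers` causal-LM API; the repo id comes from this record, and the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hudsiop/llama32-1b-wikitext2-distilled-v2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```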
ganesh004/q-taxi-v3
ganesh004
2025-06-02T13:02:19Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-02T13:02:15Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.67
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="ganesh004/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
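The snippet above assumes `gym` has been imported and that a `load_from_hub` helper is in scope; the helper comes from the Hugging Face Deep RL course materials, not from this repo. A minimal sketch of such a helper, assuming the file is a pickled dict with an `env_id` entry:

```python
import pickle

import gymnasium as gym  # or `import gym` for legacy Gym environments
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model file from the Hub and deserialize it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```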
hubble658/v3.1-deneme-1
hubble658
2025-06-02T12:52:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-VL-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-02T12:50:44Z
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** hubble658
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-VL-3B-Instruct

This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
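The card gives no usage example. A minimal inference sketch, assuming the fine-tune keeps the standard Qwen2.5-VL interface in `transformers`; the image path and prompt are hypothetical placeholders:

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo_id = "hubble658/v3.1-deneme-1"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(repo_id)

image = Image.open("example.jpg")  # hypothetical local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```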
googlepaycloneapp/googlepaycloneapp
googlepaycloneapp
2025-06-02T12:20:59Z
0
0
null
[ "region:us" ]
null
2025-06-02T12:19:59Z
# google pay clone app

**[google pay clone app](http://omninos.com/google-pay-app-clone-development/)**

The rise of digital payment platforms has revolutionized financial transactions, with Google Pay leading the charge thanks to its seamless user experience, robust security, and versatile features. Creating a Google Pay clone app means replicating its core functionality while ensuring scalability, security, and compliance with financial regulations. This article covers the essential components, technical requirements, development process, challenges, and future potential of building such an app.

## Understanding Google Pay's Core Features

To create a Google Pay clone, developers must prioritize the features that define its functionality and appeal:

- **Mobile Payments:** Users can send and receive money instantly using phone numbers, email addresses, or QR codes. This requires integration with payment systems such as Unified Payments Interface (UPI) in India, or alternatives like ACH or SEPA for real-time transfers elsewhere.
- **Bill Payments and Recharges:** The app should let users pay utility bills, mobile recharges, and subscriptions, which requires partnerships with service providers and APIs to fetch bill details and process payments.
- **Contactless Payments:** Support for NFC-based tap-to-pay at POS terminals is critical. This involves tokenization to secure card details and compatibility with NFC-capable devices.
- **Transaction History and Analytics:** A detailed log of transactions, categorized by type and date, builds user trust and requires a robust backend to store and retrieve data securely.
- **Rewards and Cashback:** Google Pay's loyalty programs, such as cashback and scratch cards, drive user engagement; replicating them means implementing gamification elements and tracking user activity.
- **Bank Account Integration:** Users should be able to link multiple bank accounts or cards, which requires secure authentication mechanisms like OAuth 2.0 and compliance with banking regulations.
- **Multi-Factor Authentication:** Biometric authentication (fingerprint or face ID) and PINs secure access, while push notifications keep users informed of transactions.
- **Merchant Payments:** The app should support payments to merchants via QR codes or online gateways, integrating with e-commerce platforms for seamless checkout.

These features form the backbone of a Google Pay clone, ensuring it meets user expectations for convenience and reliability.

## Technology Stack for Development

Selecting an appropriate technology stack is crucial for performance, scalability, and user experience. A recommended stack:

- **Frontend:** React Native or Flutter for cross-platform development, ensuring a consistent UI/UX on iOS and Android. These frameworks offer reusable components and fast rendering for a responsive interface.
- **Backend:** Node.js with Express or Django with Python for building RESTful APIs that handle user authentication, payment processing, and data management.
- **Database:** PostgreSQL for relational data (user profiles, transactions) or MongoDB for flexibility with unstructured data; Redis for caching to improve performance.
- **Payment Gateways:** APIs like Razorpay, Stripe, or PayPal for global transactions, and UPI-based solutions for markets like India, ensuring secure and fast payment processing.
- **Cloud Infrastructure:** AWS, Google Cloud, or Azure for hosting, storage, and scalability; services like AWS Lambda can handle serverless computing for specific tasks.
- **Security:** SSL/TLS encryption for data in transit, AES-256 for data at rest, and OAuth 2.0 for authentication. Compliance with PCI-DSS standards is mandatory for financial apps.
- **Real-Time Features:** WebSocket or Firebase for push notifications and real-time transaction updates.
- **DevOps Tools:** Docker for containerization, Kubernetes for orchestration, and CI/CD pipelines (e.g., Jenkins or GitHub Actions) for streamlined deployment.

This stack ensures the app is scalable, secure, and capable of handling millions of transactions.

## Development Process

Building a Google Pay clone involves a structured process:

1. **Market Research and Planning:** Analyze user needs, target markets, and competitors like PayPal, Venmo, or PhonePe. Identify regulatory requirements, such as GDPR in Europe or RBI guidelines in India.
2. **UI/UX Design:** Create a clean, intuitive interface inspired by Google Pay's minimalistic design. Use wireframing tools like Figma to design layouts with easy navigation, vibrant visuals, and accessibility features.
3. **Backend Development:** Develop APIs for user registration, authentication, payment processing, and transaction logging. A microservices architecture aids modularity and scalability.
4. **Payment Gateway Integration:** Connect with payment APIs to enable secure transactions, and test edge cases such as failed payments or network disruptions.
5. **Security Implementation:** Integrate biometric authentication, multi-factor authentication, and encryption protocols, and conduct penetration testing to identify vulnerabilities.
6. **Testing:** Perform unit testing (individual components), integration testing (API interactions), and user acceptance testing (to validate UX), using tools like Selenium or Postman for automation.
7. **Deployment:** Launch the app on the Google Play Store and Apple App Store, ensuring compliance with platform guidelines, and use beta testing to gather feedback before full release.
8. **Maintenance:** Monitor performance with tools like New Relic, address bugs, and release updates that introduce new features or improve security.

## Challenges in Development

Developing a Google Pay clone presents several challenges (a sketch of one safeguard, idempotent transfer handling, follows this list):

- **Security:** Financial apps are prime targets for cyberattacks. End-to-end encryption, secure APIs, and regular security audits are critical; tokenization for contactless payments and secure storage of user credentials are non-negotiable.
- **Regulatory Compliance:** Adhering to financial regulations like PCI-DSS, GDPR, or local banking laws requires legal expertise; non-compliance can lead to penalties or app bans.
- **Scalability:** The app must handle high transaction volumes, especially at peaks such as festive seasons; load balancing and auto-scaling cloud infrastructure are essential.
- **User Trust:** Building trust in a new app is difficult in a market dominated by established players. Transparent policies, robust customer support, and partnerships with reputable banks can help.
- **Cross-Platform Compatibility:** Consistent performance across Android, iOS, and varied device specifications demands rigorous testing.
- **Competition:** Differentiating the app requires unique features, such as AI-driven financial insights or exclusive merchant partnerships.
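To make the double-spend and failed-retry concerns above concrete, here is a minimal, illustrative sketch of the core of a transfer service with an in-memory ledger. The names (`Ledger`, `transfer`, the idempotency-key scheme) are hypothetical, not part of any specific product; a real system would use a database transaction rather than a process-local lock.

```python
import threading
from dataclasses import dataclass, field


@dataclass
class Ledger:
    """Toy in-memory ledger; balances are in minor units (cents/paise)."""
    balances: dict = field(default_factory=dict)
    processed: dict = field(default_factory=dict)  # idempotency key -> result
    _lock: threading.Lock = field(default_factory=threading.Lock)

    def transfer(self, key: str, src: str, dst: str, amount: int) -> str:
        with self._lock:  # atomic check-and-update prevents double spends
            if key in self.processed:  # retried request: replay stored result
                return self.processed[key]
            if amount <= 0 or self.balances.get(src, 0) < amount:
                result = "DECLINED"
            else:
                self.balances[src] -= amount
                self.balances[dst] = self.balances.get(dst, 0) + amount
                result = "OK"
            self.processed[key] = result  # record before acking the client
            return result


ledger = Ledger(balances={"alice": 10_000, "bob": 0})
print(ledger.transfer("req-1", "alice", "bob", 2_500))  # OK
print(ledger.transfer("req-1", "alice", "bob", 2_500))  # OK (replay, no double debit)
```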
## Monetization Strategies

A Google Pay clone can generate revenue through:

- **Transaction Fees:** Charge a small percentage on peer-to-peer or merchant transactions.
- **Premium Features:** Offer subscriptions for advanced features like higher transaction limits or investment tracking.
- **Merchant Partnerships:** Collaborate with businesses on cashback programs or sponsored promotions.
- **Ads:** Display non-intrusive ads for financial products, ensuring they don't disrupt the user experience.

## Future Scope and Innovations

To stay competitive, a Google Pay clone can explore emerging trends:

- **Cryptocurrency Integration:** Support for Bitcoin or stablecoins could attract tech-savvy users.
- **AI-Powered Insights:** Use machine learning to provide personalized spending analytics or budgeting tips.
- **IoT Integration:** Enable payments via smart devices like wearables or IoT-enabled POS systems.
- **Global Expansion:** Adapt the app for multiple markets by supporting local payment systems and currencies.
- **Sustainability Features:** Partner with eco-friendly merchants or offer carbon offset options for transactions.

## Conclusion

Building a **[google pay clone app](http://omninos.com/google-pay-app-clone-development/)** is a complex but rewarding endeavor. By replicating its core features, leveraging a modern tech stack, and addressing challenges like security and compliance, developers can create a competitive digital payment platform. With strategic monetization and innovative features, the app can carve a niche in the rapidly evolving fintech landscape, offering users a secure, convenient, and engaging payment experience.
sergioalves/aa903c68-dbe7-4ed4-9277-eb9fbd2dd847
sergioalves
2025-06-02T06:50:18Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM-1.7B-Instruct", "base_model:adapter:unsloth/SmolLM-1.7B-Instruct", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-02T06:12:52Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa903c68-dbe7-4ed4-9277-eb9fbd2dd847
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 1dd4933772d5cdfc_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_input: input
    field_instruction: instruct
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: true
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/aa903c68-dbe7-4ed4-9277-eb9fbd2dd847
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1dd4933772d5cdfc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1a691d6f-7ea3-4acf-967b-aaacc8816a63
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 1a691d6f-7ea3-4acf-967b-aaacc8816a63
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```

</details><br>

# aa903c68-dbe7-4ed4-9277-eb9fbd2dd847

This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7854

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9666        | 0.0001 | 1    | 1.7859          |
| 1.9478        | 0.0158 | 150  | 1.7856          |
| 1.8249        | 0.0315 | 300  | 1.7854          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
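The card never shows how to use the adapter. A minimal sketch assuming standard PEFT adapter loading on top of the stated base model (the 4-bit/bitsandbytes options used in training are omitted for brevity, and the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-1.7B-Instruct"
adapter_id = "sergioalves/aa903c68-dbe7-4ed4-9277-eb9fbd2dd847"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```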
MaestrAI/elizabeth_carter-lora-1748846145
MaestrAI
2025-06-02T06:35:45Z
0
0
null
[ "region:us" ]
null
2025-06-02T06:35:44Z
# elizabeth_carter LoRA Model

This is a LoRA model for the character Elizabeth Carter.

Created at 2025-06-02 08:35:45.
Unlearning/pythia1.5_blocklist_then_modernbert_filtered
Unlearning
2025-06-02T05:24:02Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-02T05:20:49Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
byungdoh/ssm-token-granularity
byungdoh
2025-06-02T03:32:37Z
0
0
null
[ "mamba-2", "token-classification", "en", "dataset:google/wiki40b", "arxiv:2412.11940", "license:apache-2.0", "region:us" ]
token-classification
2025-06-02T02:28:54Z
---
license: apache-2.0
datasets:
- google/wiki40b
language:
- en
pipeline_tag: token-classification
tags:
- mamba-2
---

# The Impact of Token Granularity on the Predictive Power of Language Model Surprisal

## Introduction

This is the model repository for the paper [The Impact of Token Granularity on the Predictive Power of Language Model Surprisal](https://arxiv.org/pdf/2412.11940v2.pdf), featuring [Mamba-2 language models](https://github.com/state-spaces/mamba) trained on the English training section of the Wiki-40B dataset. Models of three different sizes (6_8_256, 12_16_512, 24_24_768) were trained on the same data tokenized using 11 different [unigram language model tokenizers](https://github.com/google/sentencepiece) (vocabulary sizes of 256, 512, 1k, 2k, 4k, 8k, 16k, 32k, 48k, 64k, 128k), resulting in a total of 33 models. The weights at both initialization ("_0") and after training ("_10063") are released.

## Companion Repository

Please refer to the [companion GitHub repository](https://github.com/byungdoh/ssm-surprisal) for further instructions on how to load and use these models; a generic way to fetch the checkpoint files is sketched below.

## Questions

For questions or concerns, please contact Byung-Doh Oh ([oh.b@nyu.edu](mailto:oh.b@nyu.edu)).
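Since the repository hosts 33 model variants, downloading the raw files generically with `huggingface_hub` may be useful. This is only a sketch of fetching the files, not of loading the Mamba-2 weights; the companion repository is authoritative for the latter:

```python
from huggingface_hub import snapshot_download

# Download all checkpoint files for local use; loading them as Mamba-2
# models is covered by the companion repository, not shown here.
local_dir = snapshot_download(repo_id="byungdoh/ssm-token-granularity")
print(local_dir)
```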
CodeAtCMU/OLMo-2-0425-1B_full_sft_C_data_12K
CodeAtCMU
2025-06-02T00:30:32Z
0
0
transformers
[ "transformers", "safetensors", "olmo2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-02T00:29:10Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
earcherc/sophie400
earcherc
2025-06-01T20:46:50Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-06-01T20:44:19Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---

# sophie400

<Gallery />

## Download model

Weights for this model are available in Safetensors format.

[Download](/earcherc/sophie400/tree/main) them in the Files & versions tab.
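The card only covers downloading the weights. A minimal text-to-image sketch, assuming the LoRA loads with the standard `diffusers` API on the stated FLUX.1-dev base; note the base model is gated and needs substantial GPU memory, the prompt is illustrative, and no trigger word is listed (`instance_prompt` is null):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("earcherc/sophie400")  # adapter from this repo

image = pipe("portrait photo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("out.png")
```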
houcha/distil-pretrain-common-vocab
houcha
2025-06-01T18:20:20Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "cobald_parser", "feature-extraction", "pytorch", "token-classification", "custom_code", "en", "dataset:houcha/enhanced-ud-syntax", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:gpl-3.0", "model-index", "region:us" ]
token-classification
2025-06-01T18:15:17Z
---
base_model: distilbert-base-uncased
datasets: houcha/enhanced-ud-syntax
language: en
library_name: transformers
license: gpl-3.0
metrics:
- accuracy
- f1
pipeline_tag: token-classification
tags:
- pytorch
model-index:
- name: houcha/distil-pretrain-common-vocab
  results:
  - task:
      type: token-classification
    dataset:
      name: enhanced-ud-syntax
      type: houcha/enhanced-ud-syntax
      split: validation
    metrics:
    - type: f1
      value: 0.2499754084433992
      name: Null F1
    - type: accuracy
      value: 0.7287567409998542
      name: Ud Jaccard
    - type: accuracy
      value: 0.545347206848203
      name: Eud Jaccard
---

# Model Card for distil-pretrain-common-vocab

A transformer-based multihead parser for CoBaLD annotation. This model parses pre-tokenized CoNLL-U text and jointly labels each token with three tiers of tags:

* Grammatical tags (lemma, UPOS, XPOS, morphological features),
* Syntactic tags (basic and enhanced Universal Dependencies),
* Semantic tags (deep slot and semantic class).

## Model Sources

- **Repository:** https://github.com/CobaldAnnotation/CobaldParser
- **Paper:** https://dialogue-conf.org/wp-content/uploads/2025/04/BaiukIBaiukAPetrovaM.009.pdf
- **Demo:** [coming soon]

## Citation

```
@inproceedings{baiuk2025cobald,
  title={CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
  author={Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
  booktitle={Proceedings of the International Conference "Dialogue"},
  volume={I},
  year={2025}
}
```
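The card has no quick-start. Since the record carries the `custom_code` tag, loading presumably goes through `trust_remote_code`; a sketch under that assumption (the repository above remains the authoritative reference, and the exact inference call for the multihead parser may differ):

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "houcha/distil-pretrain-common-vocab"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# custom_code repo: the model class is defined in the repository itself
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
```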
sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF
sizzlebop
2025-06-01T16:04:45Z
0
0
null
[ "gguf", "LRM", "hybrid_reasoning", "efficient_reasoning", "llama-cpp", "gguf-my-repo", "dataset:agentica-org/DeepScaleR-Preview-Dataset", "base_model:THU-KEG/AdaptThink-7B-delta0.05", "base_model:quantized:THU-KEG/AdaptThink-7B-delta0.05", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-01T16:04:19Z
---
license: mit
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model: THU-KEG/AdaptThink-7B-delta0.05
tags:
- LRM
- hybrid_reasoning
- efficient_reasoning
- llama-cpp
- gguf-my-repo
---

# sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF

This model was converted to GGUF format from [`THU-KEG/AdaptThink-7B-delta0.05`](https://huggingface.co/THU-KEG/AdaptThink-7B-delta0.05) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THU-KEG/AdaptThink-7B-delta0.05) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF --hf-file adaptthink-7b-delta0.05-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF --hf-file adaptthink-7b-delta0.05-iq4_xs-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF --hf-file adaptthink-7b-delta0.05-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo sizzlebop/AdaptThink-7B-delta0.05-IQ4_XS-GGUF --hf-file adaptthink-7b-delta0.05-iq4_xs-imat.gguf -c 2048
```
amgule/meme-model
amgule
2025-06-01T15:02:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "base_model:unsloth/Qwen2-VL-2B-Instruct", "base_model:finetune:unsloth/Qwen2-VL-2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-01T11:23:56Z
---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** amgule
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-VL-2B-Instruct

This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. (Usage follows the same pattern as the Qwen2.5-VL sketch shown earlier in this section, with this repo id and the `Qwen2VLForConditionalGeneration` class.)

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF
mradermacher
2025-06-01T09:33:25Z
405
1
transformers
[ "transformers", "gguf", "Math", "OCR", "Latex", "VLM", "Plain_Text", "KIE", "Equations", "VQA", "en", "dataset:unsloth/LaTeX_OCR", "dataset:linxy/LaTeX_OCR", "base_model:prithivMLmods/Qwen2-VL-OCR-2B-Instruct", "base_model:quantized:prithivMLmods/Qwen2-VL-OCR-2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-01-12T17:25:33Z
---
base_model: prithivMLmods/Qwen2-VL-OCR-2B-Instruct
datasets:
- unsloth/LaTeX_OCR
- linxy/LaTeX_OCR
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Math
- OCR
- Latex
- VLM
- Plain_Text
- KIE
- Equations
- VQA
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

Static quants of https://huggingface.co/prithivMLmods/Qwen2-VL-OCR-2B-Instruct

<!-- provided-files -->
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.mmproj-fp16.gguf) | mmproj-fp16 | 1.4 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-OCR-2B-Instruct-GGUF/resolve/main/Qwen2-VL-OCR-2B-Instruct.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
dimasik2987/39dff867-392a-42cc-9c07-844725f21b53
dimasik2987
2025-06-01T07:04:24Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-135M", "base_model:adapter:unsloth/SmolLM2-135M", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-06-01T06:51:21Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 39dff867-392a-42cc-9c07-844725f21b53
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 3c605547c76c2fd6_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/
  type:
    field_instruction: instruct
    field_output: output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: true
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik2987/39dff867-392a-42cc-9c07-844725f21b53
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/3c605547c76c2fd6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: da387006-8de8-420f-b514-faafbf147a79
wandb_project: s56-7
wandb_run: your_name
wandb_runid: da387006-8de8-420f-b514-faafbf147a79
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```

</details><br>

# 39dff867-392a-42cc-9c07-844725f21b53

This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1526

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2388        | 0.0001 | 1    | 1.3723          |
| 1.2783        | 0.0234 | 250  | 1.1766          |
| 1.0416        | 0.0467 | 500  | 1.1526          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
RichardErkhov/MaziyarPanahi_-_T3qm7xNeuralsirkrishna_T3qM7-4bits
RichardErkhov
2025-05-31T20:43:37Z
0
0
null
[ "safetensors", "mistral", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-31T20:41:21Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

T3qm7xNeuralsirkrishna_T3qM7 - bnb 4bits

- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/T3qm7xNeuralsirkrishna_T3qM7/

Original model description:

---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: T3qm7xNeuralsirkrishna_T3qM7
base_model:
- automerger/T3qm7xNeuralsirkrishna-7B
- automerger/T3qM7-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# T3qm7xNeuralsirkrishna_T3qM7

T3qm7xNeuralsirkrishna_T3qM7 is a merge of the following models:
* [automerger/T3qm7xNeuralsirkrishna-7B](https://huggingface.co/automerger/T3qm7xNeuralsirkrishna-7B)
* [automerger/T3qM7-7B](https://huggingface.co/automerger/T3qM7-7B)

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/T3qm7xNeuralsirkrishna_T3qM7"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
MoatazSaleh/saka-14b-4bit
MoatazSaleh
2025-05-31T02:34:16Z
1
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "ar", "en", "base_model:Sakalti/Saka-14B", "base_model:quantized:Sakalti/Saka-14B", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-22T22:43:58Z
---
language:
- ar
- en
base_model:
- Sakalti/Saka-14B
pipeline_tag: text-generation
---