Recent Activity

seawolf2357 updated a Space about 2 hours ago: Heartsync/FLUX-Vision
seawolf2357 updated a Space 3 days ago: Heartsync/FREE-NSFW-HUB

aiqtech posted an update about 1 month ago
🔥 HuggingFace Heatmap Leaderboard
Visualizing AI ecosystem activity at a glance

aiqtech/Heatmap-Leaderboard

🎯 Introduction
A leaderboard that visualizes the vibrant HuggingFace community activity through heatmaps.

✨ Key Features
📊 Real-time Tracking - Model/dataset/app releases from AI labs and developers
🏆 Auto Ranking - Rankings based on activity over the past year
🎨 Responsive UI - Unique colors per organization, mobile optimized
⚡ Auto Updates - Hourly data refresh for latest information
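The ranking logic behind these features can be sketched in a few lines. This is a hypothetical reconstruction, not the leaderboard's actual code: it assumes one event per model/dataset/Space release and ranks organizations by total activity over the trailing year.

```python
from collections import Counter
from datetime import date, timedelta

def rank_orgs(events, today=None, window_days=365):
    """Rank organizations by activity count over a trailing window.

    `events` is a list of (org, date) pairs, e.g. one entry per
    model/dataset/Space release (hypothetical input format).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    totals = Counter(org for org, d in events if d >= cutoff)
    return totals.most_common()

events = [
    ("openai", date(2025, 6, 1)),
    ("Qwen", date(2025, 6, 2)),
    ("Qwen", date(2025, 6, 3)),
    ("google", date(2020, 1, 1)),  # outside the one-year window, ignored
]
print(rank_orgs(events, today=date(2025, 7, 1)))  # → [('Qwen', 2), ('openai', 1)]
```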

🌍 Major Participants
Big Tech: OpenAI, Google, Meta, Microsoft, Apple, NVIDIA
AI Startups: Anthropic, Mistral, Stability AI, Cohere, DeepSeek
Chinese Companies: Tencent, Baidu, ByteDance, Qwen
HuggingFace Official: HuggingFaceH4, HuggingFaceM4, lerobot, etc.
Active Developers: prithivMLmods, lllyasviel, multimodalart and many more

🚀 Value
Trend Analysis 📈 Real-time open source contribution insights
Inspiration 💪 Learn from other developers' activity patterns
Ecosystem Growth 🌱 Visualize AI community development

@John6666 @Nymbo @MaziyarPanahi @prithivMLmods @fffiloni @gokaygokay @enzostvs @black-forest-labs @lllyasviel @briaai @multimodalart @unsloth @Xenova @mistralai @meta-llama @facebook @openai @Anthropic @google @allenai @apple @microsoft @nvidia @CohereLabs @ibm-granite @stabilityai @huggingface @OpenEvals @HuggingFaceTB @HuggingFaceH4 @HuggingFaceM4 @HuggingFaceFW @HuggingFaceFV @open-r1 @parler-tts @nanotron @lerobot @distilbert @kakaobrain @NCSOFT @upstage @moreh @LGAI-EXAONE @naver-hyperclovax @OnomaAIResearch @kakaocorp @Baidu @PaddlePaddle @tencent @BAAI @OpenGVLab @InternLM @Skywork @MiniMaxAI @stepfun-ai @ByteDance @Bytedance Seed @bytedance-research @openbmb @THUDM @rednote-hilab @deepseek-ai @Qwen @wan-ai @XiaomiMiMo @IndexTeam @agents-course
@Agents-MCP-Hackathon @akhaliq @alexnasa @Alibaba-NLP
@ArtificialAnalysis @bartowski @bibibi12345 @calcuis
@ChenDY @city96 @Comfy-Org @fancyfeast @fal @google
seawolf2357 posted an update about 2 months ago
🚀 VEO3 Real-Time: Real-time AI Video Generation with Self-Forcing

🎯 Core Innovation: Self-Forcing Technology
VEO3 Real-Time, an open-source project challenging Google's VEO3, achieves real-time video generation through revolutionary Self-Forcing technology.

Heartsync/VEO3-RealTime

⚡ What is Self-Forcing?
While traditional methods require 50-100 steps, Self-Forcing achieves the same quality in just 1-2 steps. Through self-correction and rapid convergence, this Distribution Matching Distillation (DMD) technique maintains quality while delivering a roughly 50x speed improvement.
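To make the step-count difference concrete, here is a toy sketch (not the actual DMD implementation) of why sampler cost scales linearly with denoiser calls; `denoise` is a dummy stand-in for the real video diffusion network.

```python
def sample(denoise, x, steps):
    """Generic iterative sampler: each step refines x with one
    denoiser call, so cost grows linearly with `steps`."""
    calls = 0
    for _ in range(steps):
        x = denoise(x)
        calls += 1
    return x, calls

# Dummy denoiser standing in for the video diffusion network.
denoise = lambda x: x * 0.5

_, slow_calls = sample(denoise, 1.0, 100)  # traditional: 50-100 steps
_, fast_calls = sample(denoise, 1.0, 2)    # Self-Forcing: 1-2 steps
print(slow_calls / fast_calls)  # → 50.0, i.e. 50x fewer network evaluations
```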

💡 Technical Advantages of Self-Forcing
1. Extreme Speed
Generates 4-second videos in under 30 seconds, with first frame streaming in just 3 seconds. This represents 50x faster performance than traditional diffusion methods.
2. Consistent Quality
Maintains cinematic quality despite fewer steps, ensures temporal consistency, and minimizes artifacts.
3. Efficient Resource Usage
Reduces GPU memory usage by 70% and heat generation by 30%, enabling smooth operation on mid-range GPUs like RTX 3060.

🛠️ Technology Stack Synergy
VEO3 Real-Time integrates multiple technologies organically around Self-Forcing DMD. Self-Forcing DMD handles ultra-fast video generation, Wan2.1-T2V-1.3B serves as the high-quality video backbone, PyAV streaming enables real-time transmission, and Qwen3 adds intelligent prompt enhancement for polished results.

📊 Performance Comparison
Traditional methods require 50-100 steps, taking 2-5 minutes for the first frame and 5-10 minutes total. In contrast, Self-Forcing needs only 1-2 steps, delivering the first frame in 3 seconds and complete videos in 30 seconds while maintaining equal quality.

🔮 Future of Self-Forcing
Our next goal is real-time 1080p generation, with ongoing research to achieve
seawolf2357 posted an update about 2 months ago
⚡ FusionX Enhanced Wan 2.1 I2V (14B) 🎬

🚀 Revolutionary Image-to-Video Generation Model
Generate cinematic-quality videos in just 8 steps!

Heartsync/WAN2-1-fast-T2V-FusioniX

✨ Key Features
🎯 Ultra-Fast Generation: Premium quality in just 8-10 steps
🎬 Cinematic Quality: Smooth motion with detailed textures
🔥 FusionX Technology: Enhanced with CausVid + MPS Rewards LoRA
📐 Optimized Resolution: 576×1024 default settings
⚡ 50% Speed Boost: Faster rendering compared to base models

🛠️ Technical Stack

Base Model: Wan2.1 I2V 14B
Enhancement Technologies:

🔗 CausVid LoRA (1.0 strength) - Motion modeling
🔗 MPS Rewards LoRA (0.7 strength) - Detail optimization

Scheduler: UniPC Multistep (flow_shift=8.0)
Auto Prompt Enhancement: Automatic cinematic keyword injection
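What a LoRA "strength" means numerically can be illustrated with the standard merge formula W' = W + Σ αᵢ·BᵢAᵢ. This is a minimal sketch with toy 2x2 matrices, assuming the strengths listed above act as the scalar α for each adapter; it is not the model's actual fusion code.

```python
def merge_lora(W, adapters):
    """Merge LoRA adapters into a base weight matrix:
    W' = W + sum(alpha * B @ A) over each (alpha, B, A)."""
    rows, cols = len(W), len(W[0])
    out = [row[:] for row in W]
    for alpha, B, A in adapters:
        r = len(A)  # LoRA rank
        for i in range(rows):
            for j in range(cols):
                out[i][j] += alpha * sum(B[i][k] * A[k][j] for k in range(r))
    return out

W = [[1.0, 0.0], [0.0, 1.0]]                   # toy base weight (2x2)
causvid = (1.0, [[1.0], [0.0]], [[0.0, 1.0]])  # rank-1 adapter, strength 1.0
mps     = (0.7, [[0.0], [1.0]], [[1.0, 0.0]])  # rank-1 adapter, strength 0.7
print(merge_lora(W, [causvid, mps]))  # → [[1.0, 1.0], [0.7, 1.0]]
```

The 0.7 strength simply scales that adapter's contribution to the merged weights.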

🎨 How to Use

Upload Image - Select your starting image
Enter Prompt - Describe desired motion and style
Adjust Settings - 8 steps, 2-5 seconds recommended
Generate - Complete in just minutes!

💡 Optimization Tips
✅ Recommended Settings: 8-10 steps, 576×1024 resolution
✅ Prompting: Use "cinematic motion, smooth animation" keywords
✅ Duration: 2-5 seconds for optimal quality
✅ Motion: Emphasize natural movement and camera work
🏆 FusionX Enhanced vs Standard Models
Performance Comparison: While standard models typically require 15-20 inference steps to achieve decent quality, our FusionX Enhanced version delivers premium results in just 8-10 steps - that's more than 50% faster! The rendering speed has been dramatically improved through optimized LoRA fusion, allowing creators to iterate quickly without sacrificing quality. Motion quality has been significantly enhanced with advanced causal modeling, producing smoother, more realistic animations compared to base implementations. Detail preservation is substantially better thanks to MPS Rewards training, maintaining crisp textures and consistent temporal coherence throughout the generated sequences.
seawolf2357 posted an update 2 months ago
🚀 Just Found an Interesting New Leaderboard for Medical AI Evaluation!

I recently stumbled upon a medical domain-specific FACTS Grounding leaderboard on Hugging Face, and the approach to evaluating AI accuracy in medical contexts is quite impressive, so I thought I'd share.

📊 What is FACTS Grounding?
It's originally a benchmark developed by Google DeepMind that measures how well LLMs generate answers based solely on provided documents. What's cool about this medical-focused version is that it's designed to test even small open-source models.

🏥 Medical Domain Version Features

236 medical examples: Extracted from the original 860 examples
Tests small models like Qwen 3 1.7B: Great for resource-constrained environments
Uses Gemini 1.5 Flash for evaluation: Simplified to a single judge model

📈 The Evaluation Method is Pretty Neat

Grounding Score: Are all claims in the response supported by the provided document?
Quality Score: Does it properly answer the user's question?
Combined Score: Did it pass both checks?

Since medical information requires extreme accuracy, this thorough verification approach makes a lot of sense.
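The combined check reduces to a logical AND of the two judge verdicts. A minimal sketch with hypothetical verdicts (not the leaderboard's actual evaluation code):

```python
def facts_score(grounded, quality):
    """A response passes only if every claim is supported by the
    source document (grounding) AND it answers the question (quality)."""
    return grounded and quality

# Hypothetical judge verdicts for three model responses:
responses = [
    {"grounded": True,  "quality": True},   # pass
    {"grounded": True,  "quality": False},  # ignored the question
    {"grounded": False, "quality": True},   # made an unsupported claim
]
passed = [facts_score(r["grounded"], r["quality"]) for r in responses]
accuracy = sum(passed) / len(passed)
print(accuracy)  # 1 of 3 responses passes both checks
```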
🔗 Check It Out Yourself

The actual leaderboard: MaziyarPanahi/FACTS-Leaderboard

💭 My thoughts: As medical AI continues to evolve, evaluation tools like this are becoming increasingly important. The fact that it can test smaller models is particularly helpful for the open-source community!
seawolf2357 posted an update 3 months ago
Samsung Hacking Incident: Samsung Electronics' Official Hugging Face Account Compromised
Samsung Electronics' official Hugging Face account has been hacked. Approximately 17 hours ago, two new language models (LLMs) were registered under Samsung Electronics' official Hugging Face account. These models are:

https://huggingface.co/Samsung/MuTokenZero2-32B
https://huggingface.co/Samsung/MythoMax-L2-13B

The model descriptions contain absurd and false claims, such as being trained on "1 million W200 GPUs," hardware that doesn't even exist.
Moreover, community participants on Hugging Face who have noticed this issue are continuously posting that Samsung Electronics' account has been compromised.
There is concern about potential secondary and tertiary damage if users download these LLMs released under the Samsung Electronics account, trusting Samsung's reputation without knowing about the hack.
Samsung Electronics appears to be unaware of this situation, as they have not taken any visible measures yet, such as changing the account password.
Source: https://discord.gg/openfreeai
seawolf2357 posted an update 4 months ago
📚 Papers Leaderboard - See the Latest AI Research Trends at a Glance! ✨

Hello, AI research community! Today I'm introducing a new tool for exploring research papers. Papers Leaderboard is an open-source dashboard that makes it easy to find and filter the latest AI research papers.

Heartsync/Papers-Leaderboard

🌟 Key Features

Date Filtering: View only papers published within a specific timeframe (from May 5, 2023 to present)
Title Search: Quickly find papers containing your keywords of interest
Abstract Search: Explore paper content more deeply by searching for keywords within abstracts
Automatic Updates: The database is updated with the latest papers every hour

💡 How to Use It?

Select a start date and end date
Enter keywords you want to find in titles or abstracts
Adjust the maximum number of search results for abstract searches
Results are displayed neatly in table format
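The steps above amount to a date-range check plus a case-insensitive keyword match. A minimal sketch with made-up paper records, assuming a simple list-of-dicts storage format (the dashboard's real backend may differ):

```python
from datetime import date

def filter_papers(papers, start, end, keyword="", field="title"):
    """Return papers published within [start, end] whose chosen
    field (title or abstract) contains `keyword`, case-insensitively."""
    kw = keyword.lower()
    return [p for p in papers
            if start <= p["published"] <= end
            and kw in p[field].lower()]

papers = [
    {"title": "Scaling Diffusion Models", "abstract": "...",
     "published": date(2024, 3, 1)},
    {"title": "RAG for Medical QA", "abstract": "...",
     "published": date(2023, 6, 10)},
]
hits = filter_papers(papers, date(2024, 1, 1), date(2024, 12, 31), "diffusion")
print([p["title"] for p in hits])  # → ['Scaling Diffusion Models']
```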
aiqtech posted an update 4 months ago
🌐 AI Token Visualization Tool with Perfect Multilingual Support

Hello! Today I'm introducing my Token Visualization Tool with comprehensive multilingual support. This web-based application allows you to see how various Large Language Models (LLMs) tokenize text.

aiqtech/LLM-Token-Visual

✨ Key Features

🤖 Multiple LLM Tokenizers: Support for Llama 4, Mistral, Gemma, Deepseek, QWQ, BERT, and more
🔄 Custom Model Support: Use any tokenizer available on HuggingFace
📊 Detailed Token Statistics: Analyze total tokens, unique tokens, compression ratio, and more
🌈 Visual Token Representation: Each token assigned a unique color for visual distinction
📂 File Analysis Support: Upload and analyze large files
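The reported statistics are straightforward to compute from a token list. A minimal sketch using a toy whitespace tokenizer in place of a real HuggingFace tokenizer:

```python
def token_stats(text, tokens):
    """Statistics the tool reports for a tokenized text: total tokens,
    unique tokens, and compression ratio (characters per token)."""
    return {
        "total": len(tokens),
        "unique": len(set(tokens)),
        "chars_per_token": len(text) / len(tokens) if tokens else 0.0,
    }

# Toy whitespace tokenization standing in for a real tokenizer.
text = "the cat sat on the mat"
tokens = text.split()
print(token_stats(text, tokens))
# → {'total': 6, 'unique': 5, 'chars_per_token': 3.666...}
```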

🌏 Powerful Multilingual Support
The tool's most significant advantage is its comprehensive support across languages:

📝 Asian languages including Korean, Chinese, and Japanese fully supported
🔤 RTL (right-to-left) languages like Arabic and Hebrew supported
🈺 Special characters and emoji tokenization visualization
🧩 Compare tokenization differences between languages
💬 Mixed multilingual text processing analysis

🚀 How It Works

Select your desired tokenizer model (predefined or HuggingFace model ID)
Input multilingual text or upload a file for analysis
Click 'Analyze Text' to see the tokenized results
Visually understand how the model breaks down various languages with color-coded tokens
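Step 4's per-token coloring can be done by hashing each token to a stable hue, so identical tokens always render in the same color. A hypothetical sketch, not the tool's actual palette logic:

```python
import zlib

def token_color(token, palette_size=360):
    """Map each distinct token to a stable hue (0-359); crc32 is
    deterministic across runs, unlike Python's built-in str hash."""
    return zlib.crc32(token.encode("utf-8")) % palette_size

tokens = ["안녕", "hello", "안녕"]
hues = [token_color(t) for t in tokens]
print(hues[0] == hues[2])  # → True: same token, same color
```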

💡 Benefits of Multilingual Processing
Understanding multilingual text tokenization patterns helps you:

Optimize prompts that mix multiple languages
Compare token efficiency across languages (e.g., English vs. Korean vs. Chinese token usage)
Predict token usage for internationalization (i18n) applications
Optimize costs for multilingual AI services

🛠️ Technology Stack

Backend: Flask (Python)
Frontend: HTML, CSS, JavaScript (jQuery)
Tokenizers: 🤗 Transformers library
seawolf2357 posted an update 4 months ago
🔥 AgenticAI: The Ultimate Multimodal AI with 16 MBTI Girlfriend Personas! 🔥

Hello AI community! Today, our team is thrilled to introduce AgenticAI, an innovative open-source AI assistant that combines deep technical capabilities with uniquely personalized interaction. 💘

🛠️ MBTI 16 Types SPACES Collections link
seawolf2357/heartsync-mbti-67f793d752ef1fa542e16560

✨ 16 MBTI Girlfriend Personas

Complete MBTI Implementation: All 16 MBTI female personas modeled after iconic characters (Dana Scully, Lara Croft, etc.)
Persona Depth: Customize age groups and thinking patterns for hyper-personalized AI interactions
Personality Consistency: Each MBTI type demonstrates consistent problem-solving approaches, conversation patterns, and emotional expressions

🚀 Cutting-Edge Multimodal Capabilities

Integrated File Analysis: Deep analysis and cross-referencing of images, videos, CSV, PDF, and TXT files
Advanced Image Understanding: Interprets complex diagrams, mathematical equations, charts, and tables
Video Processing: Extracts key frames from videos and understands contextual meaning
Document RAG: Intelligent analysis and summarization of PDF/CSV/TXT files

💡 Deep Research & Knowledge Enhancement

Real-time Web Search: SerpHouse API integration for latest information retrieval and citation
Deep Reasoning Chains: Step-by-step inference process for solving complex problems
Academic Analysis: In-depth approach to mathematical problems, scientific questions, and data analysis
Structured Knowledge Generation: Systematic code, data analysis, and report creation

🖼️ Creative Generation Engine

FLUX Image Generation: Custom image creation reflecting the selected MBTI persona traits
Data Visualization: Automatic generation of code for visualizing complex datasets
Creative Writing: Story and scenario writing matching the selected persona's style

seawolf2357 posted an update 4 months ago
🎨 Ghibli-Style Image Generation with Multilingual Text Integration: FLUX.1 Hugging Face Edition 🌏✨

Hello creators! Today I'm introducing a special image generator that combines the beautiful aesthetics of Studio Ghibli with multilingual text integration! 😍

seawolf2357/Ghibli-Multilingual-Text-rendering

✨ Key Features

Ghibli-Style Image Generation - High-quality animation-style images based on FLUX.1
Multilingual Text Rendering - Support for Korean, Japanese, English, and all languages! 🇰🇷🇯🇵🇬🇧
Automatic Image Editing with Simple Prompts - Just input your desired text and you're done!
Two Stylistic Variations Provided - Get two different results from a single prompt
Full Hugging Face Spaces Support - Deploy and share instantly!

🚀 How Does It Work?

Enter a prompt describing your desired image (e.g., "a cat sitting by the window")
Input the text you want to add (any language works!)
Select the text position, size, and color
Two different versions are automatically generated!

💯 Advantages of This Model

No Tedious Post-Editing Needed - Text is perfectly integrated during generation
Natural Text Integration - Text automatically adjusts to match the image style
Perfect Multilingual Support - Any language renders beautifully!
User-Friendly Interface - Easily adjust text size, position, and color
One-Click Hugging Face Deployment - Use immediately without complex setup

🎭 Use Cases

Creating multilingual greeting cards
Animation-style social media content
Ghibli-inspired posters or banners
Character images with dialogue in various languages
Sharing with the community through Hugging Face Spaces

This project leverages Hugging Face's FLUX.1 model to open new possibilities for seamlessly integrating high-quality Ghibli-style images with multilingual text using just prompts! 🌈
Try it now and create your own artistic masterpieces! 🎨✨

#GhibliStyle #MultilingualSupport #AIImageGeneration #TextRendering #FLUX #HuggingFace