Qwen3 Coder is a personal attack on K2, and I love it. It achieves near-SOTA on LiveCodeBench while not having reasoning. Finally people are understanding that reasoning isn't necessary for high benchmark scores...
AMD summer hackathons are here! A chance to get hands-on with MI300X GPUs and accelerate models.
🇫🇷 Paris - Station F - July 5-6
🇮🇳 Mumbai - July 12-13
🇮🇳 Bengaluru - July 19-20
Hugging Face and GPU Mode will be on site, and on July 6 in Paris @ror will share lessons learned while building new kernels to accelerate Llama 3.1 405B on ROCm.
Wrapping up a week of shipping and announcements with Dell Enterprise Hub now featuring AI Applications, on-device models for AI PCs, a new CLI and Python SDK... all you need for building AI on premises!
Seed-Coder has been released: a family of models designed for coding tasks, featuring base, instruct, and reasoning variants at the 8B parameter scale, developed by the ByteDance Seed team. Unlike traditional open-source LLMs that rely on human-crafted rules or annotated data to curate code pretraining datasets, Seed-Coder introduces a model-centric data pipeline. The pipeline processes raw data from GitHub and web archives into four categories: file-level code, repository-level code, GitHub commits, and code-related web data. A quality-filter LLM evaluates code for readability, modularity, clarity, and reusability, removing the lowest-scoring 10% to create a 6-trillion-token dataset covering 89 programming languages.
Models: ByteDance-Seed/seed-coder-680de32c15ead6555c75b0e4
GitHub: https://github.com/ByteDance-Seed/Seed-Coder/tree/master
Paper: https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf
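To make the model-centric filtering idea concrete, here is a minimal sketch of scoring code files with an LLM and dropping the lowest-scoring 10%. The prompt, 0-10 scale, and the `score_with_llm` stub are assumptions for illustration, not Seed-Coder's actual pipeline code; see the paper above for the real setup.

```python
# Illustrative sketch of an LLM-based code quality filter (not Seed-Coder's code).
from dataclasses import dataclass
from typing import List

# Assumed prompt and 0-10 scale; the real criteria are described in the paper.
QUALITY_PROMPT = (
    "Rate the following code from 0 to 10 for readability, modularity, "
    "clarity, and reusability. Reply with a single number.\n\n{code}"
)

@dataclass
class CodeFile:
    path: str
    content: str
    score: float = 0.0

def score_with_llm(code: str) -> float:
    """Placeholder for a call to the quality-filter LLM using QUALITY_PROMPT."""
    # Dummy heuristic so the sketch runs without a model backend.
    return min(10.0, len(set(code.split())) / 10)

def filter_bottom_decile(files: List[CodeFile]) -> List[CodeFile]:
    """Score every file and drop the lowest-scoring 10% of the corpus."""
    for f in files:
        f.score = score_with_llm(f.content)
    files.sort(key=lambda f: f.score)
    cutoff = len(files) // 10  # lowest 10% removed
    return files[cutoff:]

if __name__ == "__main__":
    corpus = [
        CodeFile("a.py", "def add(a, b):\n    return a + b"),
        CodeFile("b.py", "x=1;y=2;print(x+y)"),
    ]
    kept = filter_bottom_decile(corpus)
    print([f.path for f in kept])
```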
Microsoft released their new fine-tuned Phi-4 models with reasoning data yesterday. They outperform or rival much larger models. Check them out if you haven't yet. 🚀
Expansion of Global and Dense Open Embeddings Dataset of Earth 🌍
We updated our previous embeddings release with three models (MMEarth, DeCUR-S1, and DeCUR-S2) in the Major TOM embeddings dataset, developed in collaboration with CloudFerro S.A., asterisk labs, and Φ-lab, European Space Agency (ESA). Together with @mikonvergence and Jędrzej S. Bojanowski, we extend the open-access collection of Copernicus embeddings built at global scale, providing dense coverage across the entire acquisition area of the Sentinel-1 and Sentinel-2 sensors.
Total embedding resources after the update:
- 51 TB of AI embeddings generated from processed Sentinel data
- over 40 billion embedding vectors
- 147 TB of raw satellite data processed
- more than 15 million Sentinel-1 and Sentinel-2 scenes and more than 16 trillion pixels analysed
This project delivers open and free vectorized expansions of Major TOM datasets available on CREODIAS and Hugging Face, setting a new standard for embedding releases and enabling lightweight, scalable ingestion of Earth Observation (EO) data for countless applications.
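For the "lightweight, scalable ingestion" part, here is a minimal sketch of streaming the embeddings from the Hugging Face Hub with the `datasets` library. The repository id below is a placeholder and the schema is not spelled out here; check the Major TOM organization page for the actual dataset names and columns.

```python
# Illustrative sketch: streaming Major TOM style embeddings from the Hub.
from datasets import load_dataset

# Placeholder: substitute a real embeddings dataset id from the Major TOM
# organization on Hugging Face (e.g. one of the new MMEarth / DeCUR releases).
REPO_ID = "Major-TOM/<embeddings-dataset>"

# Streaming avoids pulling a multi-TB dataset onto local disk.
ds = load_dataset(REPO_ID, split="train", streaming=True)

for row in ds.take(3):
    # Inspect the schema before processing; column names vary per release.
    print(row.keys())
```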
✅ Pre-trained on 119 languages and dialects (36 trillion tokens) with strong translation and instruction-following abilities. (Qwen2.5 was pre-trained on 18 trillion tokens.)
✅ Qwen3 dense models match the performance of larger Qwen2.5 models. For example, Qwen3-1.7B/4B/8B/14B/32B perform like Qwen2.5-3B/7B/14B/32B/72B.
✅ Three-stage pretraining:
• Stage 1: General language learning and knowledge building.
• Stage 2: Reasoning boost with STEM, coding, and logic skills.
• Stage 3: Long-context training.
✅ Supports MCP in the model.
✅ Strong agent skills.
✅ Supports seamless switching between thinking mode (for hard tasks like math and coding) and non-thinking mode (for fast chatting) inside the chat template (see the sketch below).
✅ Better human alignment for creative writing, roleplay, multi-turn conversations, and following detailed instructions.
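A quick sketch of toggling between the two modes through the chat template with transformers; the model id and the `enable_thinking` flag follow the Qwen3 model cards, but verify the exact arguments against the official docs for the variant you use.

```python
# Sketch of switching Qwen3 between thinking and non-thinking mode
# via the chat template (per the Qwen3 model cards).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

# enable_thinking=True  -> the template opens a reasoning block for hard tasks
# enable_thinking=False -> fast chat mode without the reasoning trace
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```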
FlowReasoner is a new system that builds a custom set of small AI agents for every user question. Unlike search-based methods, it uses reasoning-driven optimization with external execution feedback.
✅ First, it distills reasoning data about building multi-agent systems from DeepSeek-R1-671B. 🤖
✅ Then, that reasoning data is used to fine-tune DeepSeek-R1-Distill-Qwen-7B via supervised fine-tuning, giving it basic reasoning skills. 💡
✅ Finally, RL with GRPO (which optimizes by comparing groups of responses to the same query/task) further improves reasoning; a sketch of the group-relative advantage it relies on follows below.
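Since GRPO optimizes by comparing groups of responses, here is a minimal sketch of the group-relative advantage it is built on. The reward values and group size are made up, and this omits the policy update and KL term of a full GRPO trainer as well as FlowReasoner's actual execution-feedback reward.

```python
# Minimal sketch of GRPO's group-relative advantage: sample several responses
# per query, then normalize each response's reward against its group.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_queries, group_size) rewards for the sampled responses."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

if __name__ == "__main__":
    # 2 queries, 4 sampled responses each (values are invented, e.g. pass rates
    # that an execution-feedback reward might return).
    rewards = torch.tensor([[0.2, 0.8, 0.5, 0.9],
                            [0.0, 0.1, 0.0, 0.7]])
    print(group_relative_advantages(rewards))
```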