Changelog

Keep track of latest changes on the Hugging Face Hub

Jul 30, 25
Introducing HF Jobs: Run scalable compute jobs on Hugging Face

Hugging Face Jobs lets you run compute tasks on our infrastructure, from simple scripts to large-scale workloads, with a single CLI. Whether you need CPUs or high-end GPUs, Jobs gives you instant access to the hardware you need, billed by the second.

Quick Start Examples:

Run Python code directly:

hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

Use GPUs without any setup:

hf jobs run --flavor=t4-small ubuntu nvidia-smi
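
Once a job is running, you can manage it from the same CLI. A quick sketch, assuming the Docker-style ps/logs/cancel subcommands and an illustrative <job_id> placeholder:

hf jobs ps                # list your running jobs
hf jobs logs <job_id>     # stream a job's logs
hf jobs cancel <job_id>   # stop a running job
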
Jul 28, 25
Trending Papers

The Daily Papers page now includes Trending Papers, showcasing the most popular and impactful research papers from the community, along with their corresponding code implementations on GitHub. Trending Papers are ranked based on recent GitHub star activity.

Jul 25, 25
Introducing a better Hugging Face CLI

We’ve renamed huggingface-cli to hf and overhauled the command structure for speed and clarity. The new CLI uses the format hf <resource> <action>, so commands like hf auth login, hf repo create, and hf download Qwen/Qwen3-0.6B are consistent and intuitive.

Migration is easy: the old CLI still works and will gently point you to the new commands.
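
For example, the commands mentioned above map over directly:

huggingface-cli login → hf auth login
huggingface-cli repo create → hf repo create
huggingface-cli download Qwen/Qwen3-0.6B → hf download Qwen/Qwen3-0.6B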

You can learn more by reading the full announcement in this blog post.

Jul 18, 25
Inference Providers now fully support OpenAI-compatible API

Our Inference Providers service now fully supports the OpenAI-compatible API, making it easy to integrate with existing workflows. Plus, you can now specify the provider name directly in the model path for greater flexibility.

This means you can effortlessly switch between different inference providers while using the familiar OpenAI client—just point to router.huggingface.co/v1 and include the provider in the model name.

This update simplifies integration while maintaining full compatibility with OpenAI's SDK. Try it now with supported providers like novita, groq, together, and more!

Using Kimi K2 with Groq Inference Provider via OpenAI client
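
A minimal sketch of that flow, assuming your Hugging Face access token is exported as HF_TOKEN and using the moonshotai/Kimi-K2-Instruct model ID (any supported model/provider pair follows the same pattern):

import os
from openai import OpenAI

# Point the standard OpenAI client at the Hugging Face router
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # Hugging Face access token
)

# Suffix the model ID with ":groq" to route the request to Groq
completion = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct:groq",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(completion.choices[0].message.content)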