HuggingFace just made fine-tuning 10x easier!

One line of English to fine-tune any open-source LLM.

They released a new "skill" you can plug into Claude or any coding agent. It doesn't just write training scripts; it actually submits jobs to cloud GPUs, monitors progress, and pushes finished models to the Hub.

Here's how it works. You say something like:

"Fine-tune Qwen3-0.6B on the open-r1/codeforces-cots dataset"

And Claude will:

↳ Validate your dataset format
↳ Select appropriate GPU hardware
↳ Submit the job to Hugging Face Jobs
↳ Monitor training progress
↳ Push the finished model to the Hub

The model trains on Hugging Face GPUs while you do other things. When it's done, your fine-tuned model appears on the Hub, ready to use.

This isn't a toy demo. The skill supports production training methods: SFT, DPO, and GRPO. You can train models from 0.5B to 70B parameters, convert them to GGUF for local deployment, and run multi-stage pipelines. A full training run on a small model costs only about $0.30.

...
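To give a sense of what gets automated: for the example prompt above, the skill would generate something like a standard TRL SFT script. Here's a minimal sketch, assuming TRL's SFTTrainer and the "solutions" subset of the dataset; the exact subset, hyperparameters, and output names the skill actually picks are assumptions on my part:

```python
# Rough sketch of the kind of TRL SFT script the skill could generate.
# Dataset subset, hyperparameters, and repo names are assumptions here,
# not the skill's actual defaults.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the dataset named in the prompt (assumed "solutions" subset)
dataset = load_dataset("open-r1/codeforces-cots", "solutions", split="train")

config = SFTConfig(
    output_dir="qwen3-0.6b-codeforces-sft",  # also used as the Hub repo name
    num_train_epochs=1,
    per_device_train_batch_size=4,
    push_to_hub=True,  # upload the finished model to the Hub when done
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",  # TRL loads the model from this Hub ID
    train_dataset=dataset,
    args=config,
)

trainer.train()
```

The point is that you never write or run this yourself: the skill generates the script, picks the hardware, and submits it to Hugging Face Jobs, so training happens on their GPUs rather than your machine.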