NVIDIA AI Releases Star Elastic: One Checkpoint that Contains 30B, 23B, and 12B Reasoning Models with Zero-Shot Slicing
NVIDIA researchers have introduced Star Elastic, a post-training method that embeds multiple nested reasoning models — at 30B, 23B, and 12B parameter scales — inside a single checkpoint, eliminating the need for separate training runs or separately stored model weights per variant. Built on the Nemotron Elastic framework and applied to Nemotron Nano v3, the method trains all three variants in a single 160B-token run, a 360× token reduction compared to pretraining each model from scratch.

Beyond training efficiency, Star Elastic introduces elastic budget control, an inference scheme that uses a smaller submodel for the thinking phase and the full model for the final answer. The approach delivers up to 16% higher accuracy and 1.9× lower latency compared to standard budget control, while nested FP8 and NVFP4 checkpoints bring the full model family within reach of RTX-class GPUs.
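The two-phase idea behind elastic budget control can be sketched in a few lines. This is a minimal illustration, not NVIDIA's API: the `generate` stub below stands in for a real decoding call against a nested submodel sliced from the shared checkpoint, and the 12B/30B split and `<think>` wrapping are assumptions for illustration.

```python
def generate(model_size_b: int, prompt: str, max_tokens: int) -> str:
    """Stub standing in for decoding with a nested submodel of the
    given size, sliced zero-shot from the single shared checkpoint."""
    return f"[{model_size_b}B completion, up to {max_tokens} tokens]"


def elastic_budget_control(question: str, think_budget: int = 1024) -> str:
    # Phase 1: the long, token-hungry reasoning trace is produced by the
    # smallest nested slice (12B here), keeping latency and cost low.
    trace = generate(12, question, max_tokens=think_budget)

    # Phase 2: the full model (30B) writes only the short final answer,
    # conditioned on the question plus the smaller model's trace.
    prompt = f"{question}\n<think>{trace}</think>"
    return generate(30, prompt, max_tokens=256)


print(elastic_budget_control("What is 17 * 24?"))
```

Because both phases read from the same checkpoint, switching between the 12B slice and the 30B full model requires no extra weight storage — only a different slicing of the shared parameters at inference time.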