Bending space to match energy: how geometry gets molecular structure prediction to chemical accuracy

Predicting the 3D structure of a molecule—where exactly each atom sits in space—is fundamental to computational chemistry. Get it wrong by a little, and your energy calculations can be off by a lot. The gold standard is density functional theory, but DFT is slow and expensive. Machine learning offers a faster route: train a model to denoise a rough initial guess into an accurate structure.

The problem is that most denoising models operate in ordinary Euclidean space, where all directions are treated equally. But molecules don't work that way. Stretching a bond costs far more energy than rotating around it. Equal distances in Cartesian coordinates don't mean equal energy changes.

Jeheon Woo and coauthors address this mismatch directly. They construct a Riemannian manifold—a curved space with a position-dependent metric—designed so that geodesic distance correlates with energy difference. The metric is built from physics-informed internal coordinates that weight interatomic distances by how much energy it costs to change them: stiff bonds count more than soft torsions. When they compare geodesic distance against standard RMSD, the correlation with energy jumps from 0.37 to 0.90.

Training a denoising model on this curved space changes what the model learns. In Euclidean space, adding isotropic noise can break bonds or create impossible geometries—structures hundreds of kcal/mol above the minimum. On the Riemannian manifold, the same noise magnitude keeps molecules chemically sensible, staying within the same potential well. The denoising path itself follows geodesics that track energy minimization, not arbitrary straight lines through Cartesian space.

The results hit the threshold that matters: chemical accuracy, defined as energy error below 1 kcal/mol.
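The core idea—weighting internal-coordinate displacements by how much energy they cost—can be sketched in a few lines. The stiffness values, function names, and coordinates below are illustrative assumptions for intuition, not the paper's actual metric or parameters:

```python
import numpy as np

# Illustrative stiffness weights (roughly: energy cost per unit squared
# displacement). Bonds are stiff, torsions are soft. These numbers are
# placeholders, not the paper's fitted values.
STIFFNESS = {"bond": 300.0, "angle": 50.0, "torsion": 2.0}

def weighted_distance(q_a, q_b, kinds):
    """Energy-weighted distance in internal coordinates:
    the same numeric displacement costs far more along a bond
    than along a torsion."""
    w = np.array([STIFFNESS[k] for k in kinds])
    dq = np.asarray(q_a) - np.asarray(q_b)
    return float(np.sqrt(np.sum(w * dq**2)))

def rmsd(x_a, x_b):
    """Plain Cartesian RMSD: every atomic displacement counts equally,
    regardless of its energetic cost."""
    d = np.asarray(x_a) - np.asarray(x_b)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=-1))))

def riemannian_noise(q, kinds, sigma=0.05, rng=None):
    """Isotropic Gaussian noise in the whitened (metric) coordinates maps
    back to anisotropic noise in internal coordinates: soft torsions absorb
    most of the perturbation, stiff bonds barely move."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array([STIFFNESS[k] for k in kinds])
    return np.asarray(q) + sigma * rng.standard_normal(len(kinds)) / np.sqrt(w)
```

Under this toy metric, a 0.1-unit bond stretch is a much larger move than a 0.1-degree torsion twist, which is the behavior that makes geodesic distance track energy where RMSD does not.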
On the QM9 benchmark, the Riemannian model achieves a median error of 0.177 kcal/mol—roughly 20× better than force-field starting structures and significantly better than the Euclidean version. When these predictions are used as starting points for DFT refinement, computational cost drops by more than half.

The deeper point: in molecular modeling, the geometry of representation space isn't neutral. Euclidean space treats all atomic displacements as equivalent; Riemannian space can encode the physics. When you align geometric distance with energetic cost, denoising becomes optimization, and the model learns to follow the potential energy surface rather than fight it.

Paper: