OpenAI's S2S preview is polished, but it still thinks in steps: speech → text → model → text → speech. That's not how humans converse. Introducing Hydra: a native speech-to-speech model that doesn't wait for turn-taking, doesn't flatten emotion into text, and doesn't break when you interrupt it mid-sentence. Hydra reasons asynchronously, speaks and listens simultaneously, and preserves emotion because it never leaves the audio domain. It's still in beta, but the shift is obvious. If you want early access, the link is in the comments. Here's a preview of what that looks like: