Thank you for the TranslateGemma model, @GoogleDeepMind. This is the 4B 4-bit quantized version running very nicely on mobile with MLX Swift.