High-Fidelity Simultaneous Speech-to-Speech Translation

https://news.ycombinator.com/rss Hits: 10
Summary

[Submitted on 5 Feb 2025 (v1), last revised 26 Feb 2025 (this version, v2)]

Title: High-Fidelity Simultaneous Speech-To-Speech Translation
Authors: Tom Labiausse and 5 other authors

Abstract: We introduce Hibiki, a decoder-only model for simultaneous speech translation. Hibiki leverages a multistream language model to synchronously process source and target speech, and jointly produces text and audio tokens to perform speech-to-text and speech-to-speech translation. We furthermore address the fundamental challenge of simultaneous interpretation: unlike its consecutive counterpart, where one waits for the end of the source utterance before translating, it must adapt its flow to accumulate just enough context to produce a correct translation in real time, chunk by chunk. To do so, we introduce a weakly-supervised method that leverages the perplexity of an off-the-shelf text translation system to identify optimal per-word delays and create aligned synthetic data. After supervised training, Hibiki performs adaptive, simultaneous speech translation with vanilla temperature sampling. On a French-English simultaneous speech translation task, Hibiki demonstrates state-of-the-art translation quality, speaker fidelity and naturalness. Moreover, the simplicity of its inference process makes it compatible with batched translation and even real-time on-device deployment. We provide examples as well as models and inference code.

Submission history: From Neil Zeghidour. [v1] Wed, 5 Feb 2025 17:18:55 UTC (711 KB); [v2] Wed, 26 Feb 2025 09:31:58 UTC (711 KB)
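
The abstract's key alignment idea is to use an off-the-shelf text translation model's confidence to decide, for each target word, how much of the source must already have been heard. Below is a minimal sketch of one way such a per-word delay search could look; the scoring function `logprob_fn`, the `margin` threshold, and the exact stopping criterion are assumptions for illustration, not the authors' released code or API.

```python
from typing import Callable, List


def find_word_delays(
    source_words: List[str],
    target_words: List[str],
    logprob_fn: Callable[[str, str, str], float],
    margin: float = 1.0,
) -> List[int]:
    """Return, for each target word, how many source words must have been
    heard before that word can be emitted.

    logprob_fn(source_prefix, target_prefix, next_target_word) is assumed to
    be supplied by the caller, e.g. by wrapping an off-the-shelf text
    translation model, and should return the log-probability that the model
    assigns to next_target_word given the source prefix and the target
    prefix so far.
    """
    delays: List[int] = []
    for t in range(len(target_words)):
        tgt_prefix = " ".join(target_words[:t])
        next_word = target_words[t]
        # Score of the word given the *full* source acts as the reference.
        full_lp = logprob_fn(" ".join(source_words), tgt_prefix, next_word)
        # Take the earliest source prefix whose score is within `margin`
        # nats of the full-context score as the word's alignment point.
        chosen = len(source_words)
        for s in range(1, len(source_words) + 1):
            src_prefix = " ".join(source_words[:s])
            if logprob_fn(src_prefix, tgt_prefix, next_word) >= full_lp - margin:
                chosen = s
                break
        # Keep delays monotone: a target word cannot be emitted from less
        # source context than the word before it.
        if delays:
            chosen = max(chosen, delays[-1])
        delays.append(chosen)
    return delays
```

The resulting per-word delays could then be used to shift target words later in time and synthesize aligned source/target speech pairs for supervised training, in the spirit of the weakly-supervised data creation the abstract describes.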

First seen: 2025-07-03 21:09

Last seen: 2025-07-04 06:10