Faster Machine Translation Ensembling with Reinforcement Learning and Competitive Correction

Abstract

Ensembling neural machine translation (NMT) models to produce higher-quality translations than any of the $L$ individual models has been extensively studied. Recent methods typically employ a candidate selection block (CSB) and an encoder-decoder fusion block (FB), requiring inference across \textit{all} candidate models and thus incurring computational overhead that is generally $\Omega(L)$. This paper introduces \textbf{SmartGen}, a reinforcement learning (RL)-based strategy that improves the CSB by selecting a small, fixed number of candidates and identifying optimal groups to pass to the fusion block for each input sentence. Furthermore, the CSB and FB have previously been trained independently, leading to suboptimal NMT performance. Our Deep Q-Network (DQN)-based \textbf{SmartGen} addresses this by using feedback from the FB as a reward during training. We also resolve a key issue in earlier methods, where candidates were passed to the FB without modification, by introducing a Competitive Correction Block (CCB). Finally, we validate our approach with extensive experiments on English-Hindi translation tasks in both directions.
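The abstract's core loop (a DQN picks a fixed-size group of candidate models per input sentence, and the fusion block's quality score serves as the training reward) can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: `QNet`, `select_top_k`, `fusion_quality`, and all dimensions and hyperparameters are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

NUM_MODELS = 8    # L candidate NMT models (assumed value)
TOP_K = 3         # fixed number of candidates sent to the fusion block (assumed)
STATE_DIM = 256   # source-sentence embedding size (assumed)

class QNet(nn.Module):
    """Scores each candidate model for the current source sentence."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_MODELS),   # one Q-value per candidate model
        )

    def forward(self, state):
        return self.mlp(state)

def select_top_k(q_values, k=TOP_K, epsilon=0.1):
    """Epsilon-greedy choice of a fixed-size candidate group."""
    if torch.rand(()) < epsilon:
        return torch.randperm(len(q_values))[:k]   # explore: random group
    return torch.topk(q_values, k).indices         # exploit: best-scoring group

def fusion_quality(selected_ids):
    """Stand-in for the fusion block's feedback. A real system would fuse
    the selected candidates' translations and score the fused output."""
    return torch.rand(())

qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-4)

# One illustrative update: the fusion block's score acts as a one-step
# (bandit-style) reward for the Q-values of the chosen candidates.
state = torch.randn(STATE_DIM)      # embedding of the input sentence (assumed)
q = qnet(state)
chosen = select_top_k(q.detach())
reward = fusion_quality(chosen)
loss = ((q[chosen] - reward) ** 2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"chosen models: {chosen.tolist()}, reward: {reward.item():.3f}")
```

In the paper's setting, the placeholder `fusion_quality` would run the encoder-decoder fusion block and score its output, so the selector is trained with the FB's feedback rather than independently, which is the coupling the abstract highlights.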

Authors

  • Kritarth Prasad
  • Mohammadi Zaki
  • Pratik Singh
  • Pankaj Wasnik

Date

2025
