Machine Translation software is misunderstood, finds research

A new study on AI and Machine Translation (MT) software has shown that the tools translate written text more accurately than people might think – in some cases requiring zero edits from professional linguists.

A global shortage of skilled interpreters is continuing to drive the development of AI and Machine Translation, with Meta recently announcing its own AI translator tool for more than 200 languages. However, there is still a reluctance to use MT for marketing content because of often unproven prejudices that have built up over the years in the localisation industry.

The aim of the research was to debunk those common myths and prejudices. Conducted by Weglot and language consultants Nimdzi, the research evaluated and compared five of the leading Machine Translation providers – Amazon Translate, DeepL, Google Cloud, Microsoft Translator, and ModernMT.

Commenting on the motivations behind the research, Augustin Prot, CEO at Weglot, said: “We wanted to test the leading Machine Translation tools with marketing content and languages like Arabic and Chinese – which are often avoided because of the alleged lower translation quality.”

The MT tools were tested on their accuracy and reliability in translating 168 different segments containing more than 1,000 different words from American English into French, German, Italian, Spanish, Simplified Chinese, Arabic, and European Portuguese.

Two professional linguists assessed the output, producing 14 reviews in total. Across those reviews, 85% of the translations were scored as ‘Very good’ or ‘Acceptable’, with none of the Machine Translated material scored as ‘Very bad’.

Italian proved the most difficult language to translate, with an average acceptability score of 2.6, while German scored the highest at 3.4. The remaining languages scored as follows: Spanish (3.2), Portuguese (3.0), Arabic (3.0), French (2.9), and Simplified Chinese (2.8).

Of the 168 segments tested on the software, German again came out on top, with 145 segments requiring no edits from the professional linguists after being translated, compared with Portuguese, which had just 58 unedited segments.

However, the simple ampersand (&) proved to be a recurring problem for the MT tools, while there was also confusion between Brazilian and European Portuguese, as well as contextual and punctuation issues. Even so, in 10 of the 14 reviews the two professional linguists reported being “positively surprised”, with the MT output of better quality than they had originally expected.

Prot said: “An estimated 99% of translations globally are not done by professional human translators – simply because there’s not enough time in the world. As such, the volume of machine translated content has skyrocketed over the past couple of years, and we only expect this to increase further as the technology develops and matures.”

You can access the report in full here.