With a recent update, Google has optimized Google Translate, reducing the download size for each language to only 35 MB and improving how it translates content. Google wants to allow more people to access its artificial-intelligence-based translator offline. Available in 59 languages, the new Google Translate makes fewer translation errors than previous versions, thanks to an algorithm that takes the entire sentence into account rather than translating word by word.

Offline mode, acclaimed by users

As an essential tool for travelers and readers of multilingual content worldwide, Google Translate wants to appear on as many smartphones as it can. It must, however, fulfill one particular criterion: being accessible offline. Not everyone has easy access to mobile internet, especially when traveling abroad or away from home. Offline availability is one of the most popular requests from users, according to a study by Julie Cattiau, French project manager at Google Translate.

Since the latest Google Translate update on June 1st, 2018, users can now download small modules of 35 MB, one for each of the 59 languages available. The languages range from the most common — English, French, Spanish — to the most exotic — Galician, Tamil, Swahili. These small modules of information can be stored in the internal memory or on the SD card of any smartphone on the market.

Sentence-by-sentence translation (rather than word-for-word)

Google Translate also delivers better results. The new algorithm draws on an artificial intelligence engine developed by the Google Brain deep learning team, allowing Google Translate to analyze each sentence as a whole and translate it as accurately as possible. This signals the end of ‘word-for-word’ translation, which was passable between two grammatically similar languages but quickly turned into gibberish when the two languages were far apart.
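
To see why word-by-word substitution breaks down, here is a deliberately naive toy sketch in Python (my illustration, not Google's algorithm): translating a French noun phrase word by word keeps the French word order and loses the meaning.

```python
# Toy illustration of why word-for-word translation fails (not Google's algorithm).
# A naive dictionary lookup keeps the source word order, so "un chat noir"
# ("a black cat") comes out as "a cat black".

fr_to_en = {"un": "a", "chat": "cat", "noir": "black"}

def word_for_word(sentence):
    """Translate each word in isolation, ignoring context and word order."""
    return " ".join(fr_to_en.get(word, word) for word in sentence.split())

print(word_for_word("un chat noir"))  # -> "a cat black" (should be "a black cat")
```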

Note that the new algorithm does not yet work for translating handwritten sentences, or text captured in augmented reality through the phone camera via the app.

Nadine Vitalis

Source: https://www.clubic.com/pro/entreprises/google/actualite-844043-google-traduction-ia-telechargeable-fonctionner-hors-ligne.html

Could it really be? A breakthrough after decades of research in machine translation? Microsoft has recently stated that its machine translation research team has reached ‘human translation quality’. This leads to the question: will translators soon be replaced by machines? This article will shed light on the future role of human translators and linguists in the era of artificial intelligence (AI) and neural machine translation (NMT).

Quality of neural machine translation output

If you continue to read Microsoft’s article, you will learn that their claim only applies to Chinese-to-English translations of news articles. Researchers don’t even know whether human parity can be reached for every language pair. Other experts question how machine translation quality is assessed: MT assessment focuses strongly on isolated sentences without their context, so the use of anaphora such as pronouns is not assessed correctly, and sometimes low-quality human translations serve as the reference for the assessment. Currently, NMT only seems to yield its best results under artificial conditions in which the context is controlled.
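
To make the assessment issue concrete, here is a minimal sketch (my example, not taken from Microsoft's study) of sentence-level, reference-based scoring using NLTK's BLEU implementation. Because the score is computed against a single reference sentence with no surrounding context, a weak reference directly distorts the result.

```python
# Minimal sketch of sentence-level, reference-based MT scoring with BLEU.
# The score depends entirely on the reference: a weak human reference
# penalizes an otherwise good machine translation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hypothesis = "the cat is sitting on the mat".split()
good_reference = "the cat is sitting on the mat".split()
weak_reference = "a cat sat on a mat".split()

smooth = SmoothingFunction().method1  # avoids zero scores for short sentences

print(sentence_bleu([good_reference], hypothesis, smoothing_function=smooth))  # ~1.0
print(sentence_bleu([weak_reference], hypothesis, smoothing_function=smooth))  # much lower
```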

Current implementation of artificial intelligence

Could there be a more realistic view of AI and NMT in the language industry? Researchers predict that translators will be able to use machine translation in their daily work as a tool that takes over repetitive tasks; in other words, machines will carry out the routine work. However, some would argue that this is already happening today!

Artificial intelligence is already part of various CAT (Computer-Assisted Translation) tools in the form of:

  • predictive typing and vocabulary suggestions drawn from previously translated content;
  • sentence fragments that are automatically inserted or suggested with up to 100% accuracy, as well as ‘fuzzy matches’ from the translation memory (a minimal sketch of fuzzy matching follows this list);
  • machine translation engines integrated into CAT tools through an API.
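
As a rough illustration of the ‘fuzzy match’ idea mentioned above (my sketch, not the algorithm of any particular CAT tool), a translation memory can be scanned for the stored segment most similar to a new source sentence; Python's standard difflib module provides a simple similarity ratio.

```python
# Minimal sketch of a translation-memory fuzzy match (illustrative only,
# not taken from any particular CAT tool).
from difflib import SequenceMatcher

translation_memory = {
    "Press the power button to start the device.": "Appuyez sur le bouton d'alimentation pour démarrer l'appareil.",
    "Remove the battery before cleaning.": "Retirez la batterie avant le nettoyage.",
}

def best_fuzzy_match(source, threshold=0.75):
    """Return the most similar stored segment and its translation, if close enough."""
    best = max(
        translation_memory.items(),
        key=lambda item: SequenceMatcher(None, source, item[0]).ratio(),
    )
    score = SequenceMatcher(None, source, best[0]).ratio()
    return (best, round(score, 2)) if score >= threshold else (None, round(score, 2))

print(best_fuzzy_match("Press the power button to start the machine."))
```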

Nowadays, translators already work intensively with artificial intelligence. Post-editing of MT output is part of most translators’ daily life, but often they are unable to influence the process and its outcomes.

More interaction between human translators and tools

How will AI change the work of human translators in the future?

Researchers predict that technology and NMT will play a key role in the future of translators and linguists. The Common Sense Advisory team, based in Massachusetts, USA, has been developing a concept called “Augmented Translation.” Just as “Augmented Reality” enriches what we see with additional information, this concept will make relevant information accessible to translators at the moment they need it. The translator sits at the center of various technologies, some of which are already in use, while others are still awaiting validation by the respective governing bodies.

Besides technologies such as translation memory and terminology management, translators will benefit from adaptive neural machine translation and automated content enrichment.

But what are the benefits of adaptive NMT and automated content enrichment (ACE)?

Adaptive NMT learns from the feedback the translator inputs. It adapts to the translator’s writing style, automatically learns terminology and works on a sub-segment level. This means that linguists can actively influence the translation suggestions the system provides.
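
As a very rough, hypothetical illustration of that feedback loop (real adaptive NMT engines update a neural model; this toy merely caches corrected sub-segments), a translator's post-edit can be stored and used to override the engine's next suggestion for the same phrase:

```python
# Toy illustration of the adaptive feedback loop: corrected sub-segments are
# remembered and override future suggestions. Real adaptive NMT retrains a
# neural model; this cache-based sketch only mimics the observable behavior.

learned_terms = {}

def mt_suggestion(phrase):
    """Placeholder for the raw engine output (hypothetical)."""
    raw = {"release notes": "notes de libération"}  # a typical mistranslation
    return raw.get(phrase, phrase)

def suggest(phrase):
    """Prefer what the translator has already corrected over the raw MT output."""
    return learned_terms.get(phrase, mt_suggestion(phrase))

def record_post_edit(phrase, corrected):
    """Store the translator's correction so the next suggestion adapts."""
    learned_terms[phrase] = corrected

print(suggest("release notes"))                     # raw MT: "notes de libération"
record_post_edit("release notes", "notes de version")
print(suggest("release notes"))                     # adapted: "notes de version"
```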

Automated content enrichment helps the translator by providing information on ambiguous words and by helping translators localize a variety of content for numerous cultures. It is strongly connected with terminology management, as it searches a terminology database.
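
Here is a minimal, hypothetical sketch of that lookup (the termbase entries below are invented for illustration): given an ambiguous source term, the system returns the candidate senses and their domain metadata so the translator can pick the right one.

```python
# Hypothetical termbase lookup illustrating automated content enrichment:
# an ambiguous word is resolved against stored senses and their domains.

TERMBASE = {
    "bank": [
        {"sense": "financial institution", "domain": "finance", "fr": "banque"},
        {"sense": "edge of a river", "domain": "geography", "fr": "rive"},
    ],
}

def enrich(term, domain=None):
    """Return all stored senses for a term, optionally filtered by domain."""
    senses = TERMBASE.get(term.lower(), [])
    if domain:
        senses = [s for s in senses if s["domain"] == domain]
    return senses

print(enrich("bank"))                    # both senses, for the translator to choose
print(enrich("bank", domain="finance"))  # narrowed down by the document's domain
```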

Within the concept of “Augmented Translation,” translators will have instant access to all the relevant information they need while translating: they won’t have to look up the previous translation of a word that is missing from the terminology database, and they won’t have to disambiguate words using different dictionaries and countless Internet searches. Translators will finally be able to influence the suggestions of neural machine translation systems. Terminology systems and the metadata they contain are only set to gain in importance as more and more translation and localisation tools gain access to them.

If you want to learn more about artificial intelligence, you may find this video about Google’s DeepMind inspiring!

Thank you for reading, we hope you found this article insightful.

Want to learn more or apply to the TCloc Master’s Programme? 

Click HERE to visit the homepage.

Gone are the days when you would break out in a cold sweat on receiving an email written in a foreign language. Now you simply open your preferred web browser, call up your favorite machine translation app and, hey presto, you have a somewhat comprehensible translation at your fingertips.

We tend to take machine translation for granted in our frenetic, capitalist society. It is easy to forget that countless individuals have put in many thousands of people-hours just to enable computers to learn in the first place. So, let’s have a closer look at how machines do this.

One might be forgiven for believing that computers learn in much the same way as children do. There is some truth to this assumption, since machine learning is broadly classified into three distinct approaches, namely supervised, unsupervised and reinforcement learning. All three of these approaches rely heavily on complex mathematical algorithms.

Machine learning and natural language processing

As is the case in any kindergarten or primary school, children are taught by teachers. The role of the teacher is to guide children in their learning experience, making it as effective as possible. In some cases, the same can be said for computers; here, however, the teacher is a developer who feeds the computer custom-designed algorithms or carefully selected data sets. This form of learning is, unsurprisingly, called supervised machine learning.

Similarly, there comes a time in every child’s life when he or she starts relying on his or her own experiences to discover the world. This is where unsupervised learning kicks in. In fact, human beings are hardwired for unsupervised learning. The first words a child utters are often not what mom and dad expected, sometimes provoking laughter or even shock. Computers can also learn in this way.

Lastly, children learn from their mistakes. This lesson is not lost on machine learning developers, who use a similar approach to help computers generate learning algorithms. It is worth noting that the most effective machine learning strategies often involve a balanced combination of all three approaches.
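
To put the analogy in concrete terms (my example, not from the original article), here is a minimal scikit-learn sketch: a supervised classifier learns from labeled examples, while an unsupervised clustering algorithm groups the same data without ever seeing the labels. Reinforcement learning, which needs an environment and a reward signal, is not shown.

```python
# Minimal sketch contrasting supervised and unsupervised learning with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Tiny toy data set: two obvious groups of points.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3], [2.9, 3.1], [3.2, 2.8], [3.0, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels provided by a "teacher"

# Supervised: learn the mapping from points to the given labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.25], [3.1, 2.9]]))  # -> [0 1]

# Unsupervised: discover the two groups without the labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # the same two groups, with arbitrary cluster ids
```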

So why is it taking computers so long to really master natural language? Well, maybe we should look for answers closer to home, since human language is far from perfect. In fact, human language is an imperfect vehicle for information in several respects, notably its ambiguity and general vagueness. Computers don’t enjoy vagueness, since all programming languages are based on the rigorous certainties of math, statistics and logic.

It may therefore come as a surprise that Python, one of the best programming languages for mathematical modelling, is also well suited for getting to grips with natural language processing. In fact, Python offers a whole Natural Language Toolkit, better known as NLTK, for dealing with our distressingly complicated human languages.
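
By way of a quick taste (a minimal sketch; the download calls fetch the tokenizer and tagger models bundled with NLTK, and resource names may vary slightly between NLTK versions), NLTK can split a sentence into tokens and tag each one with its part of speech:

```python
# Minimal NLTK sketch: tokenize a sentence and tag parts of speech.
import nltk

# One-time download of the tokenizer and tagger models bundled with NLTK.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Machine translation has come a long way."
tokens = nltk.word_tokenize(sentence)
print(tokens)
print(nltk.pos_tag(tokens))  # e.g. [('Machine', 'NN'), ('translation', 'NN'), ...]
```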

The difficulties of natural language processing

During a recent lecture at the University of Strasbourg, François Massion, a computational linguist with many years of industry experience, pointed out that natural language processing can involve multidimensional calculations with up to three hundred variables. Massion’s observation indicates just how dense and complicated natural languages really are for computers to deal with.
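
Those “three hundred variables” most likely refer to word embeddings, which are commonly trained with around 300 dimensions (my reading, not stated explicitly in the lecture). Here is a minimal NumPy sketch of how such vectors are compared; the vectors below are random stand-ins, whereas trained embeddings would place related words like “cat” and “dog” close together.

```python
# Minimal sketch: words as high-dimensional vectors (here 300-d, randomly
# generated as stand-ins for trained embeddings), compared by cosine
# similarity, a basic operation behind many NLP calculations.
import numpy as np

rng = np.random.default_rng(42)
embeddings = {word: rng.normal(size=300) for word in ("cat", "dog", "carburetor")}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["carburetor"]))
```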

Unravelling the vast tapestry of human language is therefore a formidable and ambitious task. The development of artificial intelligence in the realm of natural language processing is still very much in its frontier years, and much remains to be discovered. For this reason, technical communicators, linguists and translators should not run away from AI.

Dystopian conceptions of the technology are largely to blame for the current apprehension felt by translators, who sometimes feel that machine translation is destroying their very reason for professional existence.

What natural language technology promises us

Admittedly, machine translation will continue to advance, but translators would be wise to use the technology to their advantage, rather than competing with it. In fact, developments in machine learning will have a significant impact on the way we all work and do business in the future. To quote Stephen Hawking in a 2016 interview with The Guardian newspaper, artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity”.