From tablets, interactive whiteboards, and MOOCs to educational software and games, digital technology is increasingly taking over European schools. Some pin great hopes on new educational technology to offer students more personalized learning and improve their cognitive performance. Others, however, are concerned about the effects of too much screen time on children’s brain health and development.

So we must ask the detractors of instructional technology and school 2.0: can education really afford to ignore digital technology when it plays an ever-greater role in our lives? And to its supporters, we ask: how certain are we that these new educational tools truly benefit our children?

European schools in the age of the digital revolution

Technological advances such as artificial intelligence are changing job markets around the world. In order to enable our children to adapt to this societal change, it seems essential to train them to use the tools of tomorrow. Some countries support this idea by implementing digital technology in the classroom.

However, rates of access to digital equipment and digital skills acquisition are still unevenly distributed across Europe. According to a European Commission report on digital education, Nordic countries currently have the highest levels of equipment. French schools have also joined the movement, albeit somewhat late: the digital plan launched by François Hollande in 2015 called for all high school students to be equipped with tablets by 2018. Today, those objectives are not being met, and primary schools are even less well equipped than secondary schools.

Alternatives to the digital plan

Even though educational technology helps students stay up to date in the 21st century, some skeptics still cling to the idea of a non-digital school, more “human” in their eyes. A study conducted by a university in Michigan, USA, found that American high school students are about 40% less empathetic than those of the 1980s and 1990s. Some of the researchers attribute this result to students’ overexposure to screens.

This is also the opinion of the Steiner-Waldorf schools, which are located throughout Europe. They prefer to avoid any contact with new technologies and place artistic creation, such as drawing, musical practice, and theatre, at the center of their pedagogy. Tablets and computers are not allowed in class until high school. According to them, it is creativity and empathy that will enable tomorrow’s adults to better adapt to the profound changes in society.

How can educational technology help students?

According to the latest PIRLS survey (Progress in International Reading Literacy Study), the reading comprehension performance of French fourth graders is among the worst in Europe. Teams of researchers are now developing solutions to improve children’s mastery of these fundamentals and directly influence how quickly and easily they learn.

Software that strengthens the cognitive pillars of learning

In Caen, a team of teachers has been experimenting with educational technology based on neuroscience since 2016. One program in particular is based on the Stroop task: the child must select the color of the ink in which a word is written, while the word itself names a different color (for example, the word “yellow” written in blue ink). This game is meant to stimulate children’s working memory.
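
As a rough illustration (the data and names here are hypothetical, not the Caen team’s actual software), the core of a Stroop trial can be sketched in a few lines: the correct answer is the ink color, which the child must name while inhibiting the automatic urge to read the word.

```python
# Minimal sketch of a Stroop trial. Hypothetical model, not the
# software used in the Caen experiment.
from dataclasses import dataclass

@dataclass
class StroopTrial:
    word: str       # the color name displayed, e.g. "yellow"
    ink_color: str  # the ink the word is printed in, e.g. "blue"

    def is_correct(self, answer: str) -> bool:
        # The child must name the ink, not read the word.
        return answer == self.ink_color

trial = StroopTrial(word="yellow", ink_color="blue")
print(trial.is_correct("blue"))    # True: naming the ink is correct
print(trial.is_correct("yellow"))  # False: reading the word is the classic error
```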

Another example is the “serious game” ELAN, a game for learning to read that has been tested in first-grade classes in Poitiers since September 2016. Immersed in a playful environment, children hear several phonemes and have to choose the corresponding graphemes repeatedly during the game. According to the researchers, video games can trigger fundamental learning mechanisms in children, such as concentration, engagement in action, correction of mistakes, and response automation.

Adaptive software for personalized learning

The French company Domoscio has recently developed educational software that adapts to students’ profiles. With each exercise, the software studies the students’ answers, errors, and even their hesitations. It can then offer them learning paths better suited to their levels and modes of reasoning. This method has already been used for several years in the US, notably by the educational startup AltSchool, whose network of micro-schools, created by a former Google executive in 2013, organizes courses in the form of adaptive playlists.
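
To make the idea concrete, here is a deliberately naive sketch of adaptive sequencing: pick the next exercise level from a student’s recent accuracy. This is made-up logic for illustration only, not Domoscio’s or AltSchool’s actual algorithm, which also weighs errors and hesitation.

```python
# Toy adaptive-path logic: promote on mastery, demote on struggle.
# Thresholds and levels are arbitrary illustration values.
def next_level(current_level: int, recent_correct: list, max_level: int = 5) -> int:
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:                        # mastered: move up
        return min(current_level + 1, max_level)
    if accuracy <= 0.4:                        # struggling: move down
        return max(current_level - 1, 1)
    return current_level                       # consolidate at this level

print(next_level(2, [True, True, True, True, False]))    # -> 3
print(next_level(2, [False, False, True, False, False])) # -> 1
```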

Another example is Khan Academy, a pioneer of MOOCs (Massive Open Online Courses), which hoped to revolutionize education when it launched its platform in 2006. It turned out to be a failed revolution: nearly 90% of registered users give up and never finish their educational playlists. In virtual environments lacking human interaction, students often end up feeling isolated and unmotivated. A recent PISA (Program for International Student Assessment) survey by the Organization for Economic Co-operation and Development (OECD) brings more bad news for school 2.0. The survey examined the correlation between the frequency of classroom use of new technologies and school performance. The results are rather negative: the more screens are used, the less successful students are. It seems that instructional technology does not yet improve children’s cognitive performance or academic results.

However, school 2.0 is still in its infancy. With the advances in neuroscience, it will be necessary to define which types of learning are best suited to digital education in order to readjust educational technology tools and make them more efficient. The challenge will be to find the right balance between creative and digital workshops, between artificial and human interactions, between technology and empathy.

What do you think about this issue? Have you experienced the use of new technologies in the classroom? In your opinion, is educational technology a boon or a bane? Tell us about your experience in the comments! To read other articles about artificial intelligence, don’t forget to visit our blog!

More and more companies are using artificial intelligence to save time in their recruitment processes. Advanced information technology helps them find the perfect candidates more quickly. It can also be a great opportunity for job seekers, as it broadens their prospects and surfaces job offers more relevant to their profiles.

What are the effects of artificial intelligence on the job search?

Aren’t you tired of spending hours reviewing thousands of job offers before finding the perfect job? Artificial intelligence might be the solution. Recent progress in information technology is promising: websites such as Jobijoba are able, thanks to their sophisticated search engines, to sort and select job offers according to not only your past experience and education but also your soft skills and personality traits. This technology helps you find relevant job offers suited to your profile and aspirations.

In addition to saving you a lot of time, these search engines can show you job offers that you would never have thought of applying for. By detecting your personal qualities and abilities, the search engine will find job offers that correspond to your personality and not necessarily to your professional field. For example, if you have held many positions with leadership responsibilities in the past, the search engine will understand that you like to be in charge and will suggest jobs that require this trait. It significantly broadens the scope of job seekers’ searches and increases their chances of finding a job.
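
A bare-bones way to picture this kind of matching is trait overlap between a candidate profile and each job posting. Real engines like Jobijoba are far more sophisticated (semantic analysis, learned models); everything below, including the profiles and scores, is invented for illustration.

```python
# Naive profile-to-job matching via Jaccard similarity of trait sets.
def match_score(profile: set, job: set) -> float:
    if not profile or not job:
        return 0.0
    return len(profile & job) / len(profile | job)

profile = {"leadership", "communication", "project management"}
jobs = {
    "team lead": {"leadership", "project management", "budgeting"},
    "data entry": {"typing", "accuracy"},
}
# Rank job offers by how well they match the candidate's traits.
ranked = sorted(jobs, key=lambda j: match_score(profile, jobs[j]), reverse=True)
print(ranked[0])  # -> team lead
```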

What is useful for job seekers is also very useful for recruiters; artificial intelligence offers interesting opportunities for human resources management as well.

How is artificial intelligence used in human resources?

When it comes to recruiting new employees, big companies often have to face an overwhelming number of applications and the recruitment process can be very long and tedious. That’s when artificial intelligence can be a great help!

On websites such as Yatedo talent, the recruiter can type a specific profile into the engine’s search bar. Algorithms then analyze all the open sources on the web, mainly online CVs and professional social networks such as LinkedIn. In just one click, the engine proposes a list of relevant profiles, sparing the recruiter the whole search and sorting process.

As mentioned before, the search engine can deduce a candidate’s soft skills and personality traits, for example by cross-checking CV information to detect passionate profiles. For instance, a candidate who has practiced web development as a hobby but holds a degree in history will be flagged by the search engine as a passionate, versatile, self-taught individual. This can also help recruiters find specific profiles well suited to a given job.

Does this mean that artificial intelligence will replace human resources managers? Thomas Allaire, creator of Jobijoba, says that this technology was made precisely to help human resources managers save time by skipping the profile search so they can focus on the job interview. Since the technology cannot replace the sensitivity and intuition of a human recruiter, artificial intelligence actually helps provide a more humanized recruitment process.

If you are interested in artificial intelligence, be sure to read our other blog posts related to this subject:

Google Translate and Its Artificial Intelligence Can Work in Offline Mode

Artificial Intelligence in Translation: The Future Role of Human Translators

Will AI Lead to the Development of a New Form of (Universal) Language and, Therefore, to a New Conception of the World?

Since the dawn of sci-fi movies, we have seen characters accompanied by an AI companion, be it an android or the central computer of their spaceship, that acted as a friend during their adventures. Today, we have yet to see such technology in action, since AI development is not that advanced (and, according to some, it should stay that way). Nevertheless, we already have access to a degree of artificial intelligence, such as Siri or Alexa, which assists us with simple tasks.

The chatbots of our day

A “chatbot” is an artificial intelligence able to keep up a conversation with a person to some degree. Today, most chatbots serve the customer service or other B2C operations of corporations; they are very limited and mostly exist to lead users to the right department to solve their problems. These bots are not capable of holding a real conversation; they can only respond with predefined replies to expected questions. They are available on companies’ websites, Facebook pages, or popular chat applications such as WhatsApp or WeChat.

More “advanced” examples are Alexa, Google Assistant, and Siri, which can understand and perform a variety of commands, such as running a quick Google search on a subject, setting an alarm, or writing a reply to someone.

How about befriending an emotional chatbot?

All these chatbots are only there to be our colleagues or assistants; at the end of the day, none of them asks how our day was. Well, now there is one that cares about you and wants to befriend you. Replika, created by Eugenia Kuyda, became available in November 2018. The main idea is to have an AI friend that asks you questions every day to initiate conversation, with the aim of getting to know you better.

As the conversations advance, Replika levels up by gathering information about you, which allows it to hold more detailed and personal conversations. You can also rate the replies it gives, since its answers sometimes demonstrate a real lack of empathy.

Screenshots posted on the Replika subreddit by the users TheAIWantsUsDead and Here4DeepFakes

Make no mistake: Replika isn’t the sci-fi AI companion that can do anything for you and be your friend. Replika can’t really do anything besides talk. It’s only there for a conversation, to ask whether you feel alright or have had a stressful day. So don’t expect a perfect, self-conscious AI companion; it makes mistakes.

Replika seems like an interesting experiment if you’re not creeped out by artificial intelligence. It might be a good idea to visit the Replika subreddit to see the friendly chatbot in action before deciding to give it a shot. At the end of the day, which one of us wouldn’t like to have someone to listen to our problems?

Have you ever interacted with a chatbot or an AI like Replika? What was your impression? Let us know in the comments below!

Thank you for reading, we hope you found this article insightful.

Want to learn more or apply to the TCLoc Master’s Program?

Click HERE to visit the homepage.

Thanks from the TCLoc web team.


Artificial intelligence has become part of our everyday life. Whether we are on our phones, cooking, or buying groceries, we are constantly surrounded by it. With new innovations every day, the place of artificial intelligence in our society is growing to the point where human control over it is being called into question. Books, movies, and video games have depicted futuristic societies where robots have taken over. Such scenarios lead us to wonder: in time, will artificial intelligence become a danger to the human race?

Artificial intelligence, our faithful companion

“Alexa, play my morning playlist.” “Siri, set my alarm for 8 a.m.” For some, the first and last interaction of the day is with an artificial intelligence device; that is how ubiquitous artificial intelligence has become. Artificial intelligence (AI) can be defined as a set of complex techniques and theories implemented to “replace” human action, and that is exactly what it does. Cleaning, cooking, shopping: all these actions can be triggered just by saying a few words to your connected watch or virtual assistant. That is how powerful AI is.

Tell me what you like and I will tell you what you are thinking about

Over the years, artificial intelligence has become more and more powerful for one simple reason: it knows what you’re thinking. This power comes from combining AI with the analysis of big data. What is “big data”? It is any information you put online, whether on social media, e-commerce platforms, or internet search engines.

All this information is collected, shared, analyzed, and used by companies through their AI to know your habits, thoughts, and desires. Once they know what you need, new types of AI are created to respond to your desires. One universal, primal desire is to have a relationship or companionship. That is the specific purpose of some AI: to create an emotional bond with a human being. When emotional closeness between humans can be replaced by a relationship with an AI, where is the limit?

Faster, stronger, cheaper

Artificial intelligence devices are not only replacing humans in relationships, but also in the workplace. With each technological breakthrough, machines can do more and more jobs originally done by humans. A two-year study by the McKinsey Global Institute foresees that by 2030, 30% of human labor could be automated. Robots are machines designed by us to do what we can do, but faster and at a lower cost; they are slowly becoming a more profitable option than actual human beings.

While the industry sector is the most impacted by the growth of technology, the next sector that will suffer from job losses is the public sector. In the future, it is possible that nurses, receptionists, teachers, and even doctors could be slowly replaced (at least in part) by robots capable of doing the same job. 

Artificial intelligence is based on human intelligence. It is based on what we do, what we need, what we feel, and how we function. The only difference between us and AI devices is that we have basic needs to fulfill, while they are driven by algorithms. They can do what we can, but without effort. If we keep improving AI, what will prevent AI machines from becoming superior to their creators? Many scientists have predicted the rise of artificial intelligence and have warned the public. Even world-renowned physicist Stephen Hawking warned that we must control the technology we create before it destroys us.



Neural machine translation, or NMT, is a fairly new paradigm: before NMT systems came into use, machine translation went through several other types of systems. As research in artificial intelligence advances, it is only natural that we try to apply it to translation.

History of Neural Machine Translation

Deep learning applications first appeared in the 1990s, used not in translation but in speech recognition. At that time, machine translation was starting to regain momentum, after almost all studies on the subject had been dropped in the 1960s because machine translation was believed to cost too much for very mediocre results.

Rule-based machine translation was then the most used type of machine translation, and statistical machine translation was starting to gain importance.

The first scientific paper on using neural networks in machine translation appeared in 2014. After that, the field started to see a lot of advances.

In 2015, OpenMT, a machine translation competition, counted a neural machine translation system among its contenders for the first time. The following year, NMT systems already made up 90% of its winners.

In 2016, several free neural MT systems were launched, the best known being DeepL Translator and the Google Neural Machine Translation system (GNMT) behind Google Translate.

How Neural Machine Translation Works

These NMT systems are made up of artificial neurons, connected to one another and organized in layers. Inspired by biological neural networks, they are capable of learning on their own from the data they receive each time a new document is translated. The “learning” process consists in modifying the weights of the artificial neurons; it is repeated with every new translation, constantly optimizing the weights and thus the quality of subsequent translations. NMT systems work with bilingual corpora of source and target documents translated in the past.

The translation itself works in two phases.

First comes the analysis phase. The words of the source document are encoded as a sequence of vectors representing their meaning. A context is generated for each word, based on the relation between the word and the context of the previous word. Using this context, the correct translation of the word is then selected among all its possible translations.

Then comes the transfer phase: a decoding phase, in which the sentence in the target language is generated.
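
The two phases above can be caricatured in a few lines of code. Everything here is a toy: real NMT systems learn the embeddings and the decoder jointly over millions of sentence pairs, whereas this sketch hard-codes a three-word “model” purely to show the encode-then-decode flow.

```python
# Toy encode/decode walk-through of the two NMT phases. Hard-coded
# "embeddings" and lexicon stand in for what a real network learns.
import math

EMB = {"the": (0.1, 0.9), "cat": (0.8, 0.2), "sleeps": (0.3, 0.7)}
TARGET = {"the": "le", "cat": "chat", "sleeps": "dort"}  # toy lexicon

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def encode(sentence):
    """Analysis phase: one context vector per word, mixing the word's
    embedding with the context accumulated so far."""
    context, out = (0.0, 0.0), []
    for word in sentence:
        v = EMB[word]
        context = tuple((c + x) / 2 for c, x in zip(context, v))
        out.append(context)
    return out

def decode(vectors):
    """Transfer phase: for each context vector, emit the target word
    whose source embedding is most similar (a stand-in for the real
    learned decoder)."""
    return [TARGET[max(EMB, key=lambda w: cosine(vec, EMB[w]))]
            for vec in vectors]

print(decode(encode(["the", "cat", "sleeps"])))  # -> ['le', 'chat', 'dort']
```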

Deep Learning: Better, but Still not Perfect

Even though deep learning systems are the best machine translation systems to date, they are not perfect and cannot work entirely on their own. Because languages are used every day, they are constantly evolving; deep learning systems therefore always need to keep learning, especially neologisms and new expressions. And to learn these new elements, they will always need human help, whether to work on the systems directly or to post-edit translated documents.

Nonetheless, systems that can “learn” on their own represent a massive improvement, not only for machine translation but for all natural language processing tasks, and for artificial intelligence in general.

Neural machine translation certainly still needs research and improvement, but it represents a bright future for machine translation. Most readers of this article will have used a neural machine translation system before, whether knowingly or not. And if you actually haven’t, there is a good chance you will now at least try one, for example DeepL Translator.


In March 2018, Microsoft announced a historic milestone: Microsoft’s neural machine translation can allegedly match human performance in translating news from Chinese to English. But how can we compare and evaluate the quality of different systems? For that, we use machine translation evaluation.

Methods of Machine Translation Evaluation

With the fast development of deep learning, machine translation (MT) research has evolved from rule-based models to neural models in recent years. Neural MT (NMT) is currently a hot topic, and we have recently seen a spike in publications, with big players like IBM, Microsoft, Facebook, Amazon, and Google all actively researching NMT.

Machine translation evaluation is difficult because natural languages are highly ambiguous. Both automatic and manual approaches can be used. Manual evaluation is better for measuring MT quality and analyzing errors in the system output: adequacy and fluency scores, post-editing measures, human ranking of translations at sentence level, task-based evaluations, etc. The most challenging issues in human evaluation of MT output are its high cost and time consumption. Therefore, various automatic methods have been proposed to measure the performance of machine translation output, such as BLEU, METEOR, F-measure, Levenshtein distance, WER (Word Error Rate), ROUGE (Recall-Oriented Understudy for Gisting Evaluation), and NIST (named after the National Institute of Standards and Technology).

BLEU: the most popular

Currently, the most popular method in machine translation evaluation is BLEU, an abbreviation for “Bilingual Evaluation Understudy”. Originally introduced in 2002, this method compares a hypothesis translation to one or more reference translations, awarding a higher score when the candidate shares many strings with the references. BLEU scores a translation on a scale of 0 to 1, though it is frequently displayed as a percentage: the closer to 1, the more the translation correlates with a human translation. The main difficulty lies in the fact that there is no single correct translation, but many alternative good translations.
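
The core idea is easy to demonstrate. The sketch below computes a simplified BLEU-style score from clipped unigram and bigram precision plus a brevity penalty; the official BLEU uses up to 4-grams and multiple references, which are omitted here for brevity.

```python
# Toy BLEU-style score: clipped 1- and 2-gram precision with a
# brevity penalty. A simplified sketch of the real metric.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())   # clipped matching counts
        precisions.append(max(overlap / max(sum(cand.values()), 1), 1e-9))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat is on the mat".split()
print(round(bleu("the cat is on the mat".split(), reference), 2))  # -> 1.0
print(round(bleu("the cat sat on a mat".split(), reference), 2))   # -> 0.37
```

Note how an exact match scores 1.0 while a partially overlapping candidate scores much lower, exactly the “shared strings” behavior described above.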

METEOR: an emphasis on recall and precision

The second most popular method in machine translation evaluation is METEOR, which stands for Metric for Evaluation of Translation with Explicit ORdering. Originally developed and released in 2004, METEOR was designed with the explicit goal of producing sentence-level scores that correlate well with human judgments of translation quality. Several key design decisions support this goal. In contrast to IBM’s BLEU, which uses only precision-based features, METEOR uses and emphasizes recall in addition to precision, a property confirmed by several metric evaluations as critical for high correlation with human judgments. METEOR also addresses reference translation variability through flexible word matching, taking morphological variants and synonyms into account as legitimate correspondences.
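
The recall emphasis can be illustrated with METEOR’s harmonic F-mean, Fmean = 10PR / (R + 9P), which weights unigram recall nine times as heavily as precision. The real metric also matches stems and synonyms and applies a fragmentation penalty, all omitted in this sketch.

```python
# Toy METEOR-style F-mean over exact unigram matches only.
from collections import Counter

def meteor_fmean(candidate, reference):
    cand, ref = Counter(candidate), Counter(reference)
    matches = sum((cand & ref).values())   # overlapping unigrams
    if matches == 0:
        return 0.0
    precision = matches / sum(cand.values())
    recall = matches / sum(ref.values())
    return 10 * precision * recall / (recall + 9 * precision)

reference = "the cat is on the mat".split()
print(round(meteor_fmean("the cat sat on the mat".split(), reference), 2))  # -> 0.83
```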

Beyond the differences between methods, some researchers point to a lack of consensus on how to report scores from the field’s dominant metric. Although people refer to “the” BLEU score, BLEU scores can vary wildly with changes to parameterization and, especially, reference processing schemes, yet these details are often absent from papers or hard to determine.

Nevertheless, human and automatic metrics are both essential in assessing MT quality and serve different purposes. Good human metrics greatly help in developing good automatic metrics.

Don’t forget to share your thoughts in the comments below!  


Why care about content quality?

By and large, content is seen as some type of art project where creativity and writers’ preferences often guide its creation. I expect this to be less and less the case in the future.

As Scott Abel (publisher of The Content Wrangler) states in the interview Understanding the Need for Content Quality Management, surprisingly few companies take content quality seriously when it comes to technical communication. Often, technical writers are expected to “wing it” and take responsibility for the quality of their writing as best they can. This stands in sharp contrast with the translation industry, which has been using sophisticated quality assurance tools for decades, including:

  • glossaries and terminology banks
  • style guides
  • spell checkers
  • readability scores

However, he expects the situation to change in the near future, especially now that intelligent content stands out as a powerful asset for Industry 4.0. Intelligent content, as explained in this article by the Content Marketing Institute, is content that is “structurally rich and semantically categorized and therefore automatically discoverable, reusable, reconfigurable, and adaptable”.

Improving and managing content quality for technical communication provides substantial benefits:

  • Good source text results in good translation quality, and intelligent content reduces translation costs by increasing text reuse.
  • Consistent terminology is an essential factor for a positive user experience.
  • Systematic content quality checks relieve technical writers from worrying about small mistakes.
  • Well-structured modular content can be output to multiple channels, with little to no human intervention, resulting in time and cost gains.
  • Semantically tagged content is SEO-friendly, resulting in better exposure for the company website, and boosting profits in the long term.

DITA and style guides

DITA (Darwin Information Typing Architecture) is an open XML standard designed by OASIS for writing technical documentation. It is especially suited to intelligent content, since it is based on a modular architecture in which each piece of information (or “topic”) can be reused across multiple documents (or “maps”).

The content is separated from the presentation, which is handled automatically at publication time, so the author doesn’t have to worry about layout guidelines. On the other hand, writing in XML requires a specific set of editing rules, for example to select the right tag or to impose a certain set of attributes. Multiple DITA style guide projects have emerged across the web, the most authoritative being The DITA Style Guide: Best Practices for Authors by Tony Self.

Enter the Dynamic Information Model

The Dynamic Information Model (DIM) (1) is an open-source project published on GitHub, created by George Bina (Syncro Soft SRL) with contributions from ComTech Services. It provides a toolkit and templates to create an integrated style guide in DITA, in which every rule can be both described and implemented within the authoring tool. The toolkit is designed to:

  • Publish an html version of the style guide.
  • Trigger warnings or suggestions when a rule is not respected, optionally with automatic corrective actions.
  • Point back to the rule so that the author can understand the error.

Behind the scenes, the implementation is handled by a library of Schematron rules and Schematron Quick Fix (SQF) actions, as well as XSLT templates to compile the rule set. The project is designed for full integration into oXygen XML, but it can also be adapted to other tools that provide Schematron and SQF support.

The goal of the project is to be accessible to all technical writers without requiring any knowledge of Schematron, SQF, or XSLT. Each rule is described in a separate topic using DITA markup, in the form of a definition list (dl) placed within a section and marked with a specific audience attribute. The embedded rules use patterns from generic rules defined in a Schematron library; to enforce a rule, the user simply refers to the rule name and specifies its parameters.

Some of the predefined rules are:

  • restrictWords: Check that the number of words stays within certain limits.
  • avoidWordInElement: Issue a warning if a word or a phrase appears inside a specified element.
  • avoidEndFragment: Issue a warning if an element ends with a specified fragment or character.
  • avoidAttributeInElement: Issue a warning if an attribute appears inside a specified element.
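
To show what a rule like avoidWordInElement does conceptually, here is a hedged reimplementation using Python’s standard library instead of Schematron. The DIM project itself uses Schematron and SQF, not Python; the function name, sample document, and warning format below are all illustrative assumptions.

```python
# Illustrative Python stand-in for an avoidWordInElement-style check:
# warn whenever a given word appears inside a given element.
import xml.etree.ElementTree as ET

def avoid_word_in_element(xml_text: str, element: str, word: str) -> list:
    """Return one warning per <element> whose text contains word."""
    root = ET.fromstring(xml_text)
    warnings = []
    for node in root.iter(element):
        text = "".join(node.itertext())
        if word.lower() in text.lower():
            warnings.append(f"Avoid the word '{word}' in <{element}>: {text!r}")
    return warnings

doc = "<topic><shortdesc>Simply click the button.</shortdesc></topic>"
for warning in avoid_word_in_element(doc, "shortdesc", "simply"):
    print(warning)
```

In the real toolkit, the equivalent Schematron rule would also point back to the style guide topic describing the rule and could offer an SQF quick fix.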

What does it look like?

Let’s have a look at one of the sample rules provided in the project:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN"
"concept.dtd">
<concept id="AuthoringGuidelines">
<title>Beginning a Topic</title>
<conbody>
<p>With the exception of glossary topics, you must include a title and prolog section before you begin the body of the topic. In addition, you can optionally include a short description of the topic. The following sections provide guidelines for these common elements.</p>
<note>
<p>When creating a new topic, always start with the <keyword keyref="companyname"/> template for the corresponding information type, if one is available. If you copy another topic, you could
inadvertently duplicate element IDs and you risk overlooking elements that you might need for the new topic that were removed from the topic you copied.</p>
</note>
<section audience="rules">
<title>Business Rules</title>
<p>We recommend adding a prolog to the different topic types, except for glossary topics.</p>
<dl>
<dlhead>
<dthd>Rule</dthd>
<ddhd>recommendElementInParent</ddhd>
</dlhead>
<dlentry>
<dt>parent</dt>
<dd>task</dd>
</dlentry>
<dlentry>
<dt>element</dt>
<dd>prolog</dd>
</dlentry>
<dlentry>
<dt>message</dt>
<dd>A prolog is required for each task. Add this just before the task body.</dd>
</dlentry>
</dl>
<dl>
<dlhead>
<dthd>Rule</dthd>
<ddhd>recommendElementInParent</ddhd>
</dlhead>
<dlentry>
<dt>parent</dt>
<dd>concept</dd>
</dlentry>
<dlentry>
<dt>element</dt>
<dd>prolog</dd>
</dlentry>
<dlentry>
<dt>message</dt>
<dd>A prolog is required for each concept. Add this just before the concept body.</dd>
</dlentry>
</dl>
<dl>
<dlhead>
<dthd>Rule</dthd>
<ddhd>recommendElementInParent</ddhd>
</dlhead>
<dlentry>
<dt>parent</dt>
<dd>reference</dd>
</dlentry>
<dlentry>
<dt>element</dt>
<dd>prolog</dd>
</dlentry>
<dlentry>
<dt>message</dt>
<dd>A prolog is required for each reference. Add this just before the reference body.</dd>
</dlentry>
</dl>
<dl>
<dlhead>
<dthd>Rule</dthd>
<ddhd>recommendElementInParent</ddhd>
</dlhead>
<dlentry>
<dt>parent</dt>
<dd>troubleshooting</dd>
</dlentry>
<dlentry>
<dt>element</dt>
<dd>prolog</dd>
</dlentry>
<dlentry>
<dt>message</dt>
<dd>A prolog is required for each troubleshooting topic. Add this just before the
troubleshooting body.</dd>
</dlentry>
</dl>
</section>
</conbody>
</concept>

This editing rule stipulates that each topic, except for glossaries, must start with a title and a prolog. Since titles are already mandatory in DITA, only the prolog rule needs to be enforced. The definition list references the predefined pattern “recommendElementInParent” and associates the parameter “parent” with each topic type, “element” with prolog, and “message” with the text that is displayed when the rule is not followed.

Making it your own

To extend the rule patterns library, users should get to grips with the basics of Schematron. Schematron is an XML-based language used to find patterns in XML documents. When a pattern is found (or not found), a defined action is performed, which can be an automatic correction, a suggestion, or a warning.
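As a rough, hand-written sketch (not the actual code generated by the “recommendElementInParent” pattern), the prolog rule for tasks could look something like this in plain Schematron; the role value shown here is illustrative:

```xml
<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <!-- Warn when a task topic has no prolog element -->
  <sch:rule context="task">
    <sch:assert test="prolog" role="warning">
      A prolog is required for each task. Add this just before the task body.
    </sch:assert>
  </sch:rule>
</sch:pattern>
```

When the assert’s XPath test fails for a matched context element, a processor such as the one built into oXygen reports the message at the offending location.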

You can find further information here: Schematron: a handy XML tool that’s not just for villains!

(1) Apache License 2.0, copyright Syncro Soft SRL – 2015

Nowadays, we tend to focus on progress, no matter what it involves for us. We don’t really think about the consequences of what we develop. Could it lead to complications? Could it be potentially dangerous? We assume it will just help us in our daily lives, be an improvement, an enhancement of some sort. Of course, not every discovery will turn into a new Manhattan Project, but that doesn’t have to stop us from thinking about what it implies for us in the long term.

The goal of this article is not to warn you about the ethical problems caused by the scientific rush toward artificial intelligence – you can find those in this interview by Thinkerview, where Eric Sadin, French philosopher and writer, explains his point of view after years of in-depth research on the subject. Together, we will dive into a thought experiment in which an AI, in the near future, is able to create its own language and therefore (assuming that the Sapir-Whorf hypothesis is true) lead to a new conception of the world. We can’t imagine what that world will be yet. However, it will surely be different from what we know, as is the case after every major discovery.

Computer programming as a universal language.

A language is an encoded way to communicate, to express ourselves through a set of rules and using various means: sounds, signals, gestures, written and typed characters. It’s a tool used to transmit intelligible information from a sender to a receiver.

To understand what a universal language is, we should take a closer look at mythology. In the myth of the Tower of Babel, every human being spoke the same language when they worked together to build the tower. It was only when they finished that God destroyed the tower and made them speak different languages as a punishment for their hubris.

A universal language is common to everyone. There is no such thing at the moment: not everyone speaks English, Chinese, or Esperanto. We can define a universal language as a common ground for working and living together.

Computer programming doesn’t differ from spoken languages in its definition. It is a set of rules and tools used to transmit information from a sender to a receiver: a series of instructions that a machine must follow, encoded in a way it can understand.

In some ways, computer programming as a whole (not taking into account every single programming language) is a universal language. It allows people everywhere on Earth to work on the same project and understand each other (for example, open-source projects like Linux).

Language shapes our perception of the world.

Edward Sapir and Benjamin Lee Whorf expressed the hypothesis of linguistic relativity (the weak version) and linguistic determinism (the strong version). Without going too deep into the Sapir-Whorf hypothesis, it states that language determines thought and, therefore, decisions. We will focus on the weaker theory here, as it is the one linguists can generally agree on. According to this theory, an individual’s thoughts and actions are shaped by the language that individual speaks.

“We dissect nature along lines laid down by our native languages. […] We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way. […]” – Benjamin Lee Whorf (1940 : 213-14)

The movie Arrival (2016) by Denis Villeneuve uses this theory and pushes it to its limits. Aliens have developed a language that can change humans’ linear perception of time, allowing them to experience “memories” of things that have yet to happen. The aliens are willing to share their language freely, but humans mistake the word “tool” for “weapon”, which builds the tension throughout the movie.

Another illustration of this hypothesis can be found in the book 1984, by George Orwell. The government uses language to control the thoughts of its citizens by removing words from usable vocabulary. “Newspeak” – as it is called in the book – acts as a tool to prevent people from thinking about anything that could hurt the government.

Will AI be the keystone to a new perception of the world?

Artificial intelligence is the simulation of human intelligence through processes such as learning, reasoning, and self-correction. Historically, a program was only called AI if it could pass the Turing Test. That test is now considered obsolete, since it can be passed by the most basic chatbots. Another classification comes from Arend Hintze, who describes different types of AI and sorts them into four categories:

  • Type 1: Reactive machines (Deep Blue), developed for a specific situation.
  • Type 2: Limited memory (self-driving cars), observations are not stored permanently.
  • Type 3: Theory of mind (doesn’t exist yet), can take into account that decision making differs according to one’s beliefs, desires, or intentions.
  • Type 4: Self-awareness (Halo’s Cortana), an AI with self-consciousness; it’s only science fiction for now.

Historically, every major discovery has changed our perception of the world or the way we interact with it. From the discovery of fire to the invention of the Internet, each discovery has directly or indirectly helped us take a step forward into the future. Take, for example, heliocentrism, and how it changed our perception of the world back in 1543. According to Arend Hintze’s definition of AI, the moment we create a type 4 AI will be the moment the tide turns.

In the end, it all depends on whether or not AI is able to create a new language. We can already find a few indications that might help us answer this question: in November 2016, two artificial neural networks were able to exchange encrypted information. The encryption method they used evolved throughout the experiment, to the point where they finally created their own method that neither their creators nor a third neural network (the observer) could crack.

A neural network can’t be qualified as “AI”, since it is only able to accomplish one specific task. However, this experiment took place more than two years ago, and major progress has been made in this domain since then. We imagine that it is only a matter of time until real communication between two artificial intelligences happens.

We’ve never known our world better than we know it today. Progress in science helps us understand the world we live in. But perhaps the limitations we encounter can’t be overcome by a more precise microscope or telescope; maybe we aren’t asking ourselves the right questions:

“The limits of my language mean the limits of my world.” – Wittgenstein, Tractatus Logico-Philosophicus (1922).

Artificial intelligence is happening, not tomorrow, but today. It’s a huge leap forward in terms of simulation, error checking, medical assistance, translation, etc. We can’t stop it, and why should it be stopped, even if we could? It’s our responsibility to design it in the best way we can to ensure it won’t slip out of our control. This is the main goal of this article: to make sure people understand that AI is the most important discovery of this century. As such, we can’t skimp on any aspect of its development, and we must consider and prepare ourselves for the profound changes it could make to our lives.

Through a recent update, Google has optimized Google Translate by reducing the file size for each language to only 35 MB as well as improving how it translates content. Google wants to allow more people to access its artificial intelligence-based translator offline. Available in 59 languages, the new Google Translate makes fewer translation errors than in previous versions, thanks to an algorithm that takes into account the entire sentence, rather than translating word by word.

An offline mode acclaimed by users

As an essential tool for travelers and readers of multilingual content worldwide, Google Translate wants to appear on as many smartphones as it can. It must, however, fulfill one particular criterion: being accessible offline. Not everyone has easy access to mobile internet, especially when traveling abroad or away from home. Offline availability is one of the most popular requests from users, according to a study by Julie Cattiau, French project manager at Google Translate.

Since the latest Google Translate update on June 1st, 2018, users can now download small modules of 35 MB, one for each of the 59 languages available. The languages range from the most common — English, French, Spanish — to the most exotic — Galician, Tamil, Swahili. These small modules of information can be stored in the memory or on the SD card of any smartphone on the market.

Sentence-by-sentence translation (rather than word-for-word)

Google Translate also delivers better results. The new algorithm now uses the resources of an artificial intelligence engine developed by the Google Brain deep learning team. Google Translate can now analyze the sentence as a whole, in order to translate as accurately as possible. This signals the end of ‘word-for-word’ translations, which made it possible to get by when translating between two grammatically similar languages but quickly produced gibberish when the two idioms were far apart.
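To see why context beyond the single word matters, here is a toy Python sketch (nothing like Google’s actual neural model — the dictionaries are invented for illustration): a word-for-word lookup mangles the French “pomme de terre”, while even a crude phrase-aware pass gets it right.

```python
# Invented toy dictionaries for illustration only.
WORD_DICT = {"pomme": "apple", "de": "of", "terre": "earth"}
PHRASE_DICT = {"pomme de terre": "potato"}

def word_for_word(sentence: str) -> str:
    """Translate each word independently, ignoring all context."""
    return " ".join(WORD_DICT.get(w, w) for w in sentence.split())

def phrase_aware(sentence: str) -> str:
    """Check for known multi-word expressions before falling back."""
    if sentence in PHRASE_DICT:
        return PHRASE_DICT[sentence]
    return word_for_word(sentence)

print(word_for_word("pomme de terre"))  # apple of earth
print(phrase_aware("pomme de terre"))   # potato
```

A real sentence-level model generalizes this idea far beyond fixed phrases, conditioning every output word on the whole input sentence.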

Note that the new algorithm does not yet work for translations of handwritten sentences, or for text captured in augmented reality through the phone’s camera via the app.

Nadine Vitalis

Source: https://www.clubic.com/pro/entreprises/google/actualite-844043-google-traduction-ia-telechargeable-fonctionner-hors-ligne.html

Could it really be? A breakthrough after decades of research in machine translation? Microsoft recently stated that its machine translation research team has reached ‘human translation quality’. This leads to the question: will translators soon be replaced by machines? This article will shed light on the future role of human translators and linguists in the era of artificial intelligence (AI) and neural machine translation (NMT).

Quality of neural machine translation output

If you read Microsoft’s article further, you will learn that the claim applies only to Chinese-to-English translations of news articles. Researchers don’t even know whether human parity can be reached for every language pair. Other experts question how machine translation quality is assessed: MT assessment focuses strongly on sentences taken out of context, the use of anaphora such as pronouns is not assessed correctly, and sometimes low-quality human translations serve as the reference for the assessment. Currently, NMT only seems to yield its best results under artificial conditions, when the context is controlled.

Current implementation of artificial intelligence

Could there be a more realistic view of AI and NMT in the language industry? Researchers predict that translators will be able to use machine translation in their daily work as a tool that performs repetitive, routine tasks. However, some would argue that this is already happening today!

Artificial intelligence is already part of various CAT (Computer Assisted Translation) tools in the form of:

  • predictive typing functions and vocabulary suggestions drawn from previously translated content;
  • automatically inserted or suggested sentence fragments, with matches of up to 100% accuracy, as well as ‘fuzzy matches’ from the translation memory;
  • machine translation engines integrated into CAT tools through an API.
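To make the ‘fuzzy match’ idea concrete, here is a minimal Python sketch using the standard library’s difflib. The translation memory entries and the 0.75 threshold are invented for the example; real CAT tools use far more sophisticated scoring.

```python
import difflib

# Hypothetical translation memory: source segments mapped to translations.
TM = {
    "Click the Save button to store your changes.":
        "Cliquez sur le bouton Enregistrer pour sauvegarder vos modifications.",
    "Select a language from the list.":
        "Sélectionnez une langue dans la liste.",
}

def fuzzy_match(segment: str, threshold: float = 0.75):
    """Return (best TM entry, score) if similar enough, else (None, score)."""
    best, best_score = None, 0.0
    for source, target in TM.items():
        score = difflib.SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best, best_score = (source, target), score
    return (best, best_score) if best_score >= threshold else (None, best_score)

# A near-duplicate of an existing segment is offered to the translator.
match, score = fuzzy_match("Click the Save button to store your settings.")
```

Here the new segment differs from the stored one only in its last word, so its similarity score clears the threshold and the stored translation is proposed for post-editing.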

Nowadays, translators already work intensively with artificial intelligence. Post-editing of MT output is part of most translators’ daily life, but often they are unable to influence the process and its outcomes.

More interaction between human translators and tools

How will AI change the work of human translators in the future?

Researchers predict that technology and NMT will play a key role in the future of translators and linguists. Based in Massachusetts, USA, the ‘Common Sense Advisory’ team has been developing a concept called “Augmented Translation.” Just as “Augmented Reality” overlays useful information on what we see, this concept will make relevant information accessible to translators when they need it. The translators will be at the center of various technologies, some of which are already in use, while others are currently awaiting validation by their respective governing bodies.

Besides technologies such as translation memory and terminology management, translators will benefit from adaptive neural machine translation and automated content enrichment.

But what are the benefits of adaptive NMT and automated content enrichment (ACE)?

Adaptive NMT learns from the feedback the translator inputs. It adapts to the translator’s writing style, automatically learns terminology and works on a sub-segment level. This means that linguists can actively influence the translation suggestions the system provides.

Automated content enrichment helps translators by providing information on ambiguous words and by supporting the localization of a variety of content for numerous cultures. It is strongly connected with terminology management, as it searches through a terminology database.
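As a minimal sketch of the kind of lookup ACE could perform against a termbase (the entries and field names here are invented for illustration), a term with several recorded senses is flagged as ambiguous so the translator sees all candidates at once:

```python
# Hypothetical termbase: each term maps to one or more senses with metadata.
TERMBASE = {
    "bank": [
        {"sense": "financial institution", "fr": "banque", "domain": "finance"},
        {"sense": "edge of a river", "fr": "rive", "domain": "geography"},
    ],
    "segment": [
        {"sense": "unit of translatable text", "fr": "segment", "domain": "localization"},
    ],
}

def enrich(word: str) -> dict:
    """Return every recorded sense of a term, flagging ambiguity."""
    senses = TERMBASE.get(word.lower(), [])
    return {"term": word, "ambiguous": len(senses) > 1, "senses": senses}

info = enrich("bank")  # two senses -> flagged as ambiguous
```

A real ACE system would draw on linked data and context analysis rather than a flat dictionary, but the principle — surfacing disambiguating metadata at translation time — is the same.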

Within the concept of “Augmented Translation,” translators will have instant access to all the relevant information they need while translating: they won’t have to look up the previous translation of a word that is missing from the terminology, and they won’t have to disambiguate words using different dictionaries and countless Internet searches. Translators will finally be able to influence the suggestions of neural machine translation systems. Terminology systems and the metadata they contain are only set to gain in importance as more and more translation and localisation teams gain access to them.

If you want to learn more about artificial intelligence, you may find this video about Google’s Deep Mind inspiring!

Thank you for reading, we hope you found this article insightful.

Want to learn more or apply to the TCloc Master’s Programme? 

Click HERE to visit the homepage.