In this article, you will learn how to survive as a technical communicator in an Artificial Intelligence-determined future.

Artificial Intelligence Evolves

Do you follow the news about recent developments in Artificial Intelligence? Have you, as a technical communicator, already felt some harbingers of the technical transition ahead? Maybe you have noticed the supportive effect of machine learning systems, but also the challenges and changes that come with them?

With a transition comes uncertainty. If you feel uncertain about Artificial Intelligence and how it will affect your professional field of technical communication, first try to assess it in relation to the transitions that have already occurred. Two immense transitions we have already gone through are globalization and the digital revolution. From those, we can gauge the extent of the transition ahead: it will be total.

Artificial Intelligence is gaining ground. IBM Research calls it “fluid intelligence,” meaning that Artificial Intelligence is supposed to “[…] combine different forms of knowledge, unpack causal relationships, and learn new things on its own.” The capability of learning independently is probably the most frightening to us, as we seem to have reserved this ability for humankind. Still, when the digital revolution arrived and occupied many areas of our professional and daily lives, many of us got used to the new, accelerated pace of life very quickly, even though the internet and its supporting devices showed a kind of intelligence that had not been known until then. Be aware that human suspicion of new technical devices is as old as time.

Recent discussions about Artificial Intelligence point out that work in professional fields requiring a high degree of education may be completely or partly automated in the future. According to those debates, standardization, which is now a well-paid and important part of many professions, will become replaceable. Consequently, it is important to prepare in order to weather the transition and to position yourself accordingly for the future.

Below, I will share useful insights and well-conceived recommendations which can form your starting point and shape your perspective on your future as a technical communicator in an Artificial Intelligence-determined environment.

Find Your Competences Beyond Standardization

I started as a regular freelance translator who was very interested in translating poetry and fiction, but I found out very soon that this was not the path to pursue. Five years later, I ended up as a legal translator, translating all kinds of legal documents which are highly standardized but cost clients much more than the translation of a sophisticated poem by an unknown author. My story does not end here, as I chose to apply for the TCLoc Master’s Program, which will guide me to find my place in the world of technical communication.

This is just one story out of many that highlights how professional fields demanding high levels of education also involve a high degree of standardization. Standardized work can be sophisticated, but it is not creative, and it is easily conquered by well-functioning, intelligent algorithms, as DeepL, a neural machine translation solution from Cologne, already demonstrates. Consequently, fields involving standardization are the ones that will likely suffer first from the transition ahead. We have to find out how we can preserve our value when this reality starts to toy with our competencies.

We don’t know exactly what Artificial Intelligence will look like in the future, but we can already say something about the shape it will take. Fluid intelligence might turn our understanding of our own intelligence upside down: new challenges will arrive, along with new competencies. With this, our understanding of higher education will experience some twists and turns, as the importance of human skills will probably change.

Define Your Future Competence Area in Collaboration with Artificial Intelligence

So let’s take a practical look at this. Technical communication consists of two parts: technical and communication. The technical part will probably be taken over very quickly. It is not the degree of complexity that matters here, but the amount of logical, often repetitive work that can easily be handled by an algorithm. The writing of mostly standardized documentation such as manuals, the fulfilment of legal requirements regarding the product’s target market, and the product’s assembly instructions might be automated soon.

The situation is different with communication. As we all know, the language professions are already transforming. Machine learning systems have already had a huge impact on the daily work of translators. But there is still a human part in communication that cannot be captured on a purely rational level. Humans don’t want to communicate with robots, not even if the machine is more intelligent than they are.

A technical communicator has to coordinate with many people in the company and communicate plans of coordination to them. The technical communicator has to motivate the team in order to reach common milestones and to celebrate accomplished projects. Additionally, and most importantly, the technical communicator has to communicate with the target group: the documentation must be adapted to a special group of humans who have to be not only analysed but also understood. It is important to concentrate on the parts of the work that will not be taken over, even when the tools appear capable of handling those tasks satisfactorily.

Develop Your Profession with Artificial Intelligence as a Supportive Tool

Artificial Intelligence might sound frightening to us right now. Nonetheless, preparing for unknown transitions runs deep in the history of humankind. Artificial Intelligence can be embraced and welcomed as a new tool that will turn our world upside down, making it better in some ways and worse in others. Transition comes with both advantages and disadvantages.

The most important thing is to seize the advantages and minimize your personal disadvantages. Artificial Intelligence will definitely be an enriching tool for many professions and a part of our daily lives. We saw a similar development with the invention of the mobile phone, and few of us would consider reversing that invention. We have to adapt to the upcoming changes, and we have to educate ourselves.

Artificial Intelligence is not a mysterious miracle that cannot be understood. We, as technical communicators, have to find information ourselves and adapt documentation to specific target audiences. So let’s regard Artificial Intelligence as a new product for which we have to write the documentation. Write that documentation for yourself: you are the target group. Get acquainted with the new tool we will have at our disposal in the future, and analyse it in order to get prepared.

Get Involved in Your Company’s Future Needs

After you’ve sufficiently analysed your own target audience regarding Artificial Intelligence, do the same with your company. You can be the person who is aware of the coming transitions and who can recommend strategies that let your company master the challenge in a beneficial way. This is, after all, part of the technical communicator’s job profile: raising awareness in your company of certain needs.

During the Tekom training, I learned a lot about the fact that technical communication is a young field of work that still has to be justified within companies, as they often don’t see the need to invest in high-quality documentation. Consequently, you, as a technical communicator, can take your experience with in-house negotiations and extend it to the future role of Artificial Intelligence in the company.

Start Now!

In a nutshell:

  • Adapt your knowledge and your professional routines to Artificial Intelligence. 
  • Take responsibility for your own future and do what you can do best: 
    • analyse the new product, 
    • educate yourself about new possible parts of your professional field, 
    • find strategies in order to implement Artificial Intelligence into your professional environment as a supportive tool.
  • Find solutions for your company and develop strategies that will allow your company to not only weather the transition, but also to find its new position on the market afterwards.
  • Find the part of your work that will remain irreplaceable in an Artificial Intelligence-determined future.

On our Master’s Program blog, there are already very useful and interesting articles about our future with Artificial Intelligence, so I recommend starting right here!

A quarter of Americans have at least one smart speaker device. But what languages can these devices speak? In this article we will discuss the challenges that the localization of smart speakers may entail.

What Is A Smart Speaker?

A smart speaker is a speaker with an integrated virtual assistant that enables human-machine interaction. We can ask our smart speaker trivial questions about the weather or the latest news, or make it the central hub for our smart home to control our electronic devices, be it a fridge or a TV. In the past few years, smart speakers have entered many homes all over the globe. According to The Smart Audio Report, in 2019, 24% of adult Americans were using at least one smart speaker device. But what about other countries? How hard is it to localize a smart speaker for a new language?

What Languages Can You Speak with Your Smart Speaker?

The most popular smart speakers that support multiple languages include Amazon Echo, Google Home and Apple HomePod. The big market players are working hard on introducing more and more languages to their smart speakers to reach a bigger audience. Currently, the number of languages supported by virtual assistants is as follows: Amazon’s Alexa – 8, Google Assistant on Home devices – 13, Apple’s Siri on HomePod – only 6 languages. These smart speakers also speak some dialects of English, French and Spanish. However, overall these numbers do not seem that impressive. Why do the global market players not localize their smart speakers in more languages?

Localization Challenges for Smart Speakers

Language localization for a smart speaker is a costly and elaborate development process, as it requires the collection, analysis and testing of a vast amount of speech data. We cannot just take the strings of information used to generate the input and output for an English-speaking smart speaker and translate them word-for-word into, say, Russian. The goal of a virtual assistant is to imitate natural human conversation, so we expect our smart speaker to have not only grammatically correct speech, but also some knowledge about our local culture. For example, we may want it to tell us the local news or a joke that will be understood in a given cultural context.

In 2018 Google announced the localization of its virtual assistant into 30 languages. As of 2021, Google has not even reached half of this goal. This can be explained by the fact that it takes the same amount of effort and money to teach a virtual assistant to speak languages with only a few million native speakers as it does to introduce a much more commonly spoken language. It is no surprise that many smart speakers can speak Spanish (483 million native speakers), but not Latvian or Slovenian (1.3 and 2.5 million speakers respectively). It is therefore sensible to assume that it will probably take some time until languages with fewer native speakers become available on smart speakers – not so much because of their linguistic complexity, but due to their lack of potential profitability.

The big smart speaker producers have not only been slow in localizing the less widely spoken languages, but also some of the more widely spoken ones, such as Chinese and Russian. This delay opened the door to local tech companies to introduce their monolingual smart speakers. For example, the Yandex.Station smart speaker with its integrated virtual assistant Alice has literally no competition on the Russian-speaking market as there is no other smart speaker supporting the Russian language. The advantage of such monolingual smart speakers is that local companies may concentrate on developing a product specifically designed for their own market. This allows them to create a product which sounds more authentic and is better able to reflect the specific local cultural nuances than its international counterparts.

Future Prospects of Smart Speakers Across the Globe

Artificial intelligence technology is constantly developing, giving us reason to believe that the costs for smart speaker localization will decrease with time. This will allow smart speakers to speak more languages and acquire more functions, making them increasingly useful for many households all over the world.

What is your experience with smart speakers? Do they speak your native language yet? Let us know in the comments. 

Do you want to learn more about localization? Apply to the TCLoc Master’s Program now!

Machine Translation is getting more and more advanced but will it be able to replace human beings in the future? Let’s find out in the article below.

In the wake of Keywords Studios’ acquisition of machine translation provider KantanMT, Neural Machine Translation (NMT) has become a hot topic among language professionals and service providers specializing in game translation. Are we getting closer to the point where machines will be able to translate games with little or no human intervention? I believe that day is still a long way away, but NMT is here to stay — and it is already proving useful in some applications.

Context is everything

Gone are the days when the often nonsensical output of Google Translate was an endless source of hilarity for professional linguists. NMT is a huge step up over earlier machine translation efforts, but it still has considerable limitations, including the fact that it is not context-aware — and context is everything in game localization.

In software, even more so than in general translation, words may completely change meaning depending on where they appear. An innocuous term like “save” may refer to saving the game in a system menu, to saving money in a promotional message, or to saving a character in a mission prompt. Each of these meanings would be rendered differently in most non-English languages. Dialogue is even more challenging: when they translate games, localizers need to know who is talking to whom and how the characters relate to each other in order to achieve the right tone and make the dialogue flow naturally.
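To see why this matters in practice, here is a minimal sketch of how localization tooling can key translations on context rather than on the source text alone. The context keys and German renderings below are invented for illustration, not taken from any real game:

```python
# Minimal sketch: the same English source string needs different
# translations depending on where it appears. Context keys and the
# German translations are illustrative only.
STRINGS = {
    ("save", "system_menu"): "Speichern",  # save the game
    ("save", "shop_promo"):  "Sparen",     # save money
    ("save", "mission"):     "Retten",     # save a character
}

def translate(source: str, context: str) -> str:
    """Look up a translation using both the source text and a context key."""
    return STRINGS[(source.lower(), context)]

print(translate("Save", "system_menu"))  # Speichern
print(translate("Save", "shop_promo"))   # Sparen
```

A context-unaware system sees only the word “save” and has to guess; a human translator (or well-designed string table) disambiguates before translating.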

Human translators can gather this contextual information from notes and loc kits provided by the development team; when this information is not available, they can make educated guesses based on their familiarity with the game and clues found in string IDs, file structure, etc. In contrast, NMT systems lack the ability to make such inferences, and more importantly, since they are unaware of the limits of their knowledge, they will not think to ask the developer before making incorrect assumptions.

Game localization poses unique challenges

Creativity is key in game localization: games are meant to be immersive experiences, and their spell is easily broken by an incorrect or unimaginative translation. Particularly in the fantasy and sci-fi genres, games often feature made-up names for characters, locations, and concepts, all of which need to be adapted to suit the genre conventions and players’ expectations. This demands a native speaker’s knowledge of the target culture and a kind of creativity that is beyond the ability of current machine translation systems. In fact, there is little scientific consensus on whether general artificial intelligence that is capable of proper creative thought will arrive in our lifetimes.

The ability to think creatively also comes into play when dealing with variables or “tokens”. These are often used by developers to create messages that change dynamically depending on the in-game situation. However, every language is structured differently, and messages using variables often need to be adjusted to ensure that the translation respects the target language grammar and syntax. This requires sound linguistic judgment and the ability to think outside the box, skills that remain solely in the purview of human translators.
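For instance, a simple English message like “You found {n} coin(s)” cannot be translated token-for-token into Russian, which distinguishes three plural forms. The sketch below is illustrative only; the selection rule follows the standard CLDR plural categories for Russian:

```python
# Why variables complicate translation: Russian needs three plural
# forms of "coin" depending on the count. The words are real Russian;
# the rule matches CLDR plural categories for Russian.
RU_COIN = {"one": "монета", "few": "монеты", "many": "монет"}

def ru_plural_category(n: int) -> str:
    if n % 10 == 1 and n % 100 != 11:
        return "one"        # 1, 21, 101... (but not 11)
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "few"        # 2-4, 22-24... (but not 12-14)
    return "many"           # everything else, incl. 5-20

def found_coins_ru(n: int) -> str:
    return f"Вы нашли {n} {RU_COIN[ru_plural_category(n)]}"

for n in (1, 2, 5, 11, 21):
    print(found_coins_ru(n))
```

A token like `{n} coin(s)` that works for English simply has no one-to-one Russian equivalent; the translator has to restructure the message around rules like these.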

Neural machine translation also struggles with terminology: generic NMT engines use generic terminology and cannot be forced to adhere to a specific glossary. You can influence an engine’s term choices by training it with carefully selected text, but you will rarely have a large enough corpus to train a franchise-specific engine. Besides, platform owners often require compliance with their own system-specific terminology; otherwise, they may refuse to clear the game for release, so this can be a major hindrance to the use of NMT.

Fitness for purpose

Is NMT entirely useless for translating games, then? Far from it. In fact, in a recent article for MultiLingual magazine, Cristina Anselmi and Inés Rubio, from Electronic Arts, identify several good use cases for neural machine translation in games.

For example, a properly trained, platform-specific NMT engine can make short work of repetitive system text, although post-editing remains necessary to ensure compliance with platform requirements. Post-edited NMT can also provide good quality documentation at a low cost. Even “raw” (unedited) machine translation is proving useful in first-line customer support for companies lacking the resources to operate a multilingual support center.

In the long run, NMT offers exciting opportunities for game design — from translating procedurally-generated content in real time to making customer-created content available to an international audience — situations in which players may be willing to trade some quality for an endless supply of fresh content.

Machine translation is a tool

At the end of the day, NMT is just one more tool in the localization arsenal: hugely powerful if used properly, but with equally important limitations. For the foreseeable future, professional human translators must remain at the center of the localization process, but there is no reason why NMT cannot be used to translate ancillary content or to boost localizers’ productivity in the less creative aspects of game translation.

Intrigued by the possibilities of NMT and its applications in localization? You may want to check out other blog articles on machine translation in the TCLoc blog, or you may even want to consider signing up for the TCLoc Master’s degree, which covers this and many related topics.

In this new, confined world, some companies continue to operate and sometimes need to recruit. In recent years, we have witnessed the emergence of online, remote recruitment tools. This has been made possible in particular thanks to artificial intelligence (AI), which is playing an increasingly important role in HR operations in general, and more specifically in the recruitment process.

A technological breakthrough for Human Resources departments

The LinkedIn Global Recruiting Trends survey conducted in 2018 showed that 76% of recruiters surveyed believe that artificial intelligence has a significant impact on the quality of candidates and hires, and that it brings greater diversity to recruitment. Artificial intelligence has therefore been integrated into the recruitment processes of many companies for several years now, and even more of them began using it this past year, with the widespread adoption of telecommuting. AI can facilitate remote recruitment by handling tasks that recruiters would otherwise perform during face-to-face interviews. It also enables employers to recruit more quickly, as machines can analyze a larger number of applications. This is attractive to recruiters because AI represents time savings for companies and, ultimately, probably cost savings as well.

From simple chatbots…

Talking to robots is nothing new; we’ve been doing it for several years now. Many companies use chatbots as part of their recruitment process. These chatbots immediately answer questions asked by candidates. This form of AI also enables an initial sorting of applications by analyzing predefined criteria, such as candidates’ prerequisites, their training, or their experience, before sending the most relevant profiles to recruiters. Recruiters can also select specific questions to ask candidates.
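This initial sorting step can be pictured as a simple rule-based filter. The field names and thresholds below are hypothetical, and real screening systems are considerably more elaborate, but the sketch shows the principle:

```python
# Hypothetical sketch of automated pre-screening: applications are
# checked against predefined criteria before a recruiter sees them.
# Field names and thresholds are invented for illustration.
CRITERIA = {
    "min_years_experience": 2,
    "required_skills": {"python", "localization"},
}

def screen(candidate: dict) -> bool:
    """Return True if the candidate meets all predefined criteria."""
    meets_experience = candidate["years_experience"] >= CRITERIA["min_years_experience"]
    has_skills = CRITERIA["required_skills"] <= set(candidate["skills"])
    return meets_experience and has_skills

applicants = [
    {"name": "A", "years_experience": 3, "skills": ["python", "localization", "sql"]},
    {"name": "B", "years_experience": 1, "skills": ["python"]},
]
shortlist = [a["name"] for a in applicants if screen(a)]
print(shortlist)  # ['A']
```

Even this toy version makes the ethical stakes visible: whoever writes the criteria decides, invisibly, who never reaches a human recruiter.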

…to the complete analysis of a candidate

But the technology doesn’t stop there. Around the time of the first home confinement in France, robots of a new kind were developed. As teleworking became widespread this past year, it became more difficult for candidates to travel and for recruiters to receive them in their offices. Many recruiters now use video interviews, which can then be analyzed by robots thanks to AI. For example, in interviews that are not live, candidates record their answers to questions about their skills.

AI can analyze these videos faster than a real person would, by transcribing the audio and analyzing the clarity of language and the mastery of certain concepts. There are also tools that can detect candidates’ emotions via facial recognition, and thereby establish their levels of stress and anxiety, or analyze body language and non-verbal communication. Thus, artificial intelligence enables the complete analysis of candidates’ verbal and body language.

Artificial Intelligence for Recruiting: Yes, but Well Supervised!

We should also reflect on the ethics of these new recruitment methods. Artificial intelligence can indeed lead to forms of discrimination that are much more difficult to detect. But what kind of discrimination? Take the example of Xerox, which, in 2015, used AI to screen out candidates from working-class neighborhoods with too long a commute in order to maximize employee longevity and thus limit turnover in the company. Even if it is harder to detect, this is indeed discrimination.

That is why it is important that AI be supervised to protect candidates and limit the excesses of recruiters. But what does the law say? In Europe, a candidate has the right to request that recruitment be carried out at least in part by a natural person, and not only by automated processing. But can a candidate really impose his or her will on recruiters? Wouldn’t such a request cost him or her the job? In the United States, the Illinois Artificial Intelligence Video Interview Act, signed in August 2019, requires employers who use AI-analyzed video interviews as part of their recruitment process to obtain written consent from candidates.

What do you think of artificial intelligence in the recruitment process? Have you ever dealt with this type of recruitment? Do you think that human resources should remain human-specific?

Are you interested in AI in recruitment? Find out more by reading our other articles on the subject: How Is Artificial Intelligence Used in Human Resources and the Job Search?

Machine learning (ML) is a subset of artificial intelligence that uses computer algorithms to improve automatically through experience. What if a program could adapt a graphical interface to your liking by using machine-learning technology? Well, this is what some companies are already doing. Since a machine-learning program can learn from user behavior, combining UX and machine learning makes perfect sense. But it’s not as simple as it might sound. In this article, we will try to understand the challenges of machine-learning product design and how to overcome them.
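As a toy illustration of an interface adapting to user behavior, consider a menu that reorders itself based on observed clicks. This frequency counting is far simpler than the models real ML-driven products use, but it shows the basic idea of software learning from experience:

```python
from collections import Counter

# Toy sketch: menu items float to the top as the user clicks them.
# Real adaptive interfaces use actual ML models; this only shows the idea.
clicks = Counter()

def record_click(item: str) -> None:
    clicks[item] += 1

def adaptive_menu(items):
    # Most-used items first; ties keep the original order (stable sort).
    return sorted(items, key=lambda i: -clicks[i])

menu = ["Open", "Export", "Share", "Print"]
for item in ["Share", "Share", "Export"]:
    record_click(item)
print(adaptive_menu(menu))  # ['Share', 'Export', 'Open', 'Print']
```

Even at this level, a design question appears that UXers and engineers must answer together: should the interface change under the user’s hands at all, and how quickly?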

The Challenge of Pairing UX with Machine Learning

One of the main challenges is ensuring that UX designers (UXers) and engineers, such as data scientists and developers, cooperate throughout a web project’s workflow. While a significant part of a UXer’s job is to make an interface ergonomic and aesthetically pleasing, they should also understand the technical basics of development. This will enable them to collaborate better with developers and to understand the role machine learning can play.

Designers know broadly what machine learning is but often can’t identify where it is needed and what it is capable of doing in a specific context. If UXers design an interface without any knowledge of how it can be developed later on and what tools are available, they won’t take advantage of the full potential of the product team. Sharing skills and understanding the basics of what our colleagues do is a necessity, especially in tech companies where many digital projects are conducted. It is essential to continuously improve as a team.

In the case of designing machine-learning-driven products, UXers should spend enough time with ML engineers in order to fully understand what ML models can do. This may seem like a waste of time but think of it as an investment. If enough time is taken during the early stages of a project, it is more likely to run smoothly and misunderstandings are less likely to occur. In the end, UXers will be able to work more efficiently and design interfaces that fully embrace ML-enhanced interactions. Continuous communication throughout the project is key.

Machine Learning as a Design Material

Let’s look more closely at the design phase. Adding machine learning to the design process does not automatically mean that the resulting product will be better or that users will start using it more. UX designers need to look at machine learning as a design material in order to take advantage of its capabilities and apply them effectively to the project. However, machine learning should not be considered just another traditional design material. A good method to explore is the research-through-design approach.

In general, research is conducted early on to understand what needs to be improved or implemented through UX design. It’s a separate step from the actual design process, which, when designing ML-enhanced products, often leaves UXers with an unclear idea of what needs to be done and how to achieve it. With the research-through-design approach, however, research and design become an inseparable cycle: design, research, repeat. This approach allows UX experts to better understand the issues and how the technology might help, and maybe even reinvent the technology’s purposes.

A Successful UX and ML Collaboration

Google’s PAIR Bungee program was created to see how UXers could effectively work alongside ML experts in order to improve the resulting product: a generative machine-learning interface to assist music composition. For this experiment, three UX designers were put into an ML research host team for three months. 

The program started with a course about machine learning basics and practices at Google. That way, the designers could begin the project with a better idea of what they would be working with. From there, the UXers implemented a user-centric approach throughout the project, which is always more likely to result in a product that resonates with the target audience. During the preparation phase, the participants defined a target audience and organized a design sprint session with the help of user interviews. Then, they picked key concepts to pursue and further explored them with users.

This experiment confirmed how relevant and efficient it is to integrate UX earlier and more fully in the ML development process. In the end, UXers learned more about algorithms and their capabilities, while data scientists gained more insight into user-centered practices and what is worth pursuing in the first place. Organizing workshops inspired by this program could help many tech companies working with AI to take their products to the next level.

What do you think about UXers and ML engineers cooperating to develop better user-centric products? Do you think artificial intelligence algorithms will end up replacing UX designers? Let us know in the comments!

If you want to read more about UX, check out these articles.

From tablets, interactive whiteboards, and MOOCs to educational software and games, digital technology is increasingly taking over European schools. Some people pin great hopes on new educational technology to offer students more personalized learning and improve their cognitive performance. Others, however, are concerned about the effects of too much screen time on children’s brain health and development.

To the detractors of instructional technology and school 2.0, we must therefore ask: can education really afford to ignore digital technology when it plays an ever bigger role in our lives? And to its supporters: how certain are we that these new educational tools truly benefit our children?

European schools in the age of the digital revolution

Technological advances such as artificial intelligence are changing job markets around the world. In order to enable our children to adapt to this societal change, it seems essential to train them to use the tools of tomorrow. Some countries support this idea by implementing digital technology in the classroom.

However, access to digital equipment and the acquisition of digital skills are still unevenly distributed across Europe. According to a European Commission report on digital education, Nordic countries currently have the highest level of equipment. French schools have also joined the movement, albeit a bit late: the digital plan launched by François Hollande in 2015 called for all high school students to be equipped with tablets by 2018. Today, those objectives have not been met, and primary schools are even less well equipped than secondary schools.

Alternatives to the digital plan

Even though educational technology makes it possible for students to stay up to date in the 21st century, some skeptics still cling to the idea of a non-digital school, more “human” in their eyes. A study conducted by a university in Michigan, USA, shows that American students today are less empathetic (by about 40%) than those of the 1980s and 1990s. Some of the researchers attribute this result to students’ overexposure to screens.

This is also the opinion of the Steiner-Waldorf schools, which are found throughout Europe. They prefer to avoid contact with new technologies and place artistic creation, such as drawing, music, and theatre, at the center of their pedagogy. Tablets and computers are not allowed in class until high school. In their view, it is creativity and empathy that will enable tomorrow’s adults to adapt to the profound changes in society.

How can educational technology help students?

According to the latest PIRLS survey (Progress in International Reading Literacy Study), the performance of French fourth graders in reading comprehension is among the worst in Europe. Teams of researchers are now developing various solutions to improve children’s mastery of these fundamentals and directly influence how quickly and easily they learn.

Software that strengthens the cognitive pillars of learning

In Caen, a team of teachers has been experimenting with educational technology based on neuroscience since 2016. One software program in particular implements the Stroop game: the child must select the color of the ink in which a word is written, while the word itself names a different color (for example, the word “yellow” written in blue ink). This game is supposed to stimulate children’s working memory.
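The Stroop mechanic is simple enough to sketch in a few lines. This toy version (not the Caen team’s actual software) generates an incongruent word/ink pair and checks the child’s answer against the ink color:

```python
import random

# Toy sketch of a Stroop trial: the word names one color, the ink is
# another, and the correct answer is the ink color, not the word.
COLORS = ["red", "blue", "yellow", "green"]

def make_trial(rng: random.Random):
    """Return an incongruent (word, ink) pair."""
    word = rng.choice(COLORS)
    ink = rng.choice([c for c in COLORS if c != word])  # never congruent
    return word, ink

def check_answer(trial, answer: str) -> bool:
    _word, ink = trial
    return answer == ink  # the ink color is the right answer

trial = make_trial(random.Random(0))
word, ink = trial
print(f'The word "{word}" shown in {ink} ink; correct answer: {ink}')
```

The difficulty of the task comes from suppressing the automatic reading response, which is exactly the interference the game exploits.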

Another example is the “serious game” ELAN, a game for learning to read which has been tested in first-grade classes in Poitiers since September 2016. Immersed in a playful environment, children hear several phonemes and have to choose the corresponding graphemes repeatedly during the game. According to the researchers, video game support would trigger fundamental learning mechanisms in children such as concentration, engagement in action, correction of mistakes, and response automation.
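
As an illustration of the game mechanic described above (not the actual ELAN software), a phoneme-to-grapheme trial can be sketched as follows; the toy mapping of phonemes to graphemes is invented for the example.

```python
# Toy mapping: each heard phoneme accepts several written graphemes.
PHONEME_TO_GRAPHEMES = {
    "/o/": {"o", "au", "eau"},
    "/f/": {"f", "ph"},
    "/s/": {"s", "ss", "c"},
}

def check_grapheme(phoneme: str, chosen: str) -> bool:
    """A trial succeeds if the chosen grapheme can spell the heard phoneme."""
    return chosen in PHONEME_TO_GRAPHEMES.get(phoneme, set())

def play_round(answers: list[tuple[str, str]]) -> int:
    """Count correct choices; in the real game, missed items are replayed."""
    return sum(check_grapheme(p, g) for p, g in answers)
```

For instance, play_round([("/f/", "ph"), ("/o/", "x")]) scores 1: "ph" is a valid spelling of /f/, while "x" is not a spelling of /o/.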

Adaptive software for personalized learning

The French company Domoscio has recently developed educational software that adapts to students' profiles. With each exercise, the software studies the students' answers, errors, and even their hesitations. It can then offer them learning paths better suited to their levels and modes of reasoning. This method has already been used for several years in the US, notably by the educational startup Altschool in its network of micro-schools created by a former Google executive in 2013, where courses are organized in the form of adaptive playlists.
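
The adaptive logic described above can be imagined roughly as follows. This is a hypothetical sketch, not Domoscio's actual method: recent accuracy and hesitation (response time) decide whether the next exercise gets harder, easier, or stays at the same level.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    # Each entry records one exercise: (answered correctly?, seconds taken)
    results: list = field(default_factory=list)

    def record(self, correct: bool, seconds: float) -> None:
        self.results.append((correct, seconds))

    def next_difficulty(self, current: int) -> int:
        """Step difficulty up on fast correct answers, down on repeated errors."""
        recent = self.results[-3:]
        if not recent:
            return current
        accuracy = sum(c for c, _ in recent) / len(recent)
        avg_time = sum(t for _, t in recent) / len(recent)
        if accuracy == 1.0 and avg_time < 5.0:
            return current + 1          # mastered: move to harder items
        if accuracy < 0.5:
            return max(1, current - 1)  # struggling: move to easier items
        return current                  # hesitant but mostly correct: consolidate
```

A real system would of course use a far richer learner model, but the principle is the same: every answer updates the profile, and the profile drives the next item on the learning path.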

Another example is the Khan Academy, a pioneer of MOOCs (Massive Open Online Courses), which thought it would revolutionize the educational field by launching its platform in 2006. It turned out to be a failed revolution: nearly 90% of registered users give up and do not go through with their educational playlists. In virtual environments lacking human interaction, students often end up feeling isolated and unmotivated.

A recent PISA (Program for International Student Assessment) survey by the Organization for Economic Co-operation and Development (OECD) also brings bad news for school 2.0. The survey examined the correlation between the frequency of use of new technologies in the classroom and school performance. The results are rather negative: the more screens are used, the less successful students are. It seems that instructional technology does not yet improve children's cognitive performance and academic results.

However, school 2.0 is still in its infancy. With the advances in neuroscience, it will be necessary to define which types of learning are best suited to digital education in order to readjust educational technology tools and make them more efficient. The challenge will be to find the right balance between creative and digital workshops, between artificial and human interactions, between technology and empathy.

What do you think about this issue? Have you experienced the use of new technologies in the classroom? In your opinion, is educational technology boon or bane? Tell us about your experience in the comments! To read other articles about artificial intelligence, don’t forget to visit our blog!

More and more companies are using artificial intelligence to save time in their recruitment process. This advanced information technology helps them find the right candidates more quickly. It may also be a great opportunity for job seekers, as it broadens their job prospects and proposes job offers that are more relevant to their profiles.

What are the effects of artificial intelligence on the job search?

Don't you ever get tired of spending hours reviewing thousands of job offers before finding the perfect job? Artificial intelligence might be the solution to this problem. Indeed, recent progress in the information technology field is rather promising. For example, websites such as Jobijoba are able, thanks to their sophisticated search engines, to sort through and select job offers according to not only your past experience and education but also your soft skills and personality traits. This precise technology helps you find relevant job offers suited to your profile and aspirations.

In addition to saving you a lot of time, these search engines can show you job offers that you would never have thought of applying for. By detecting your personal qualities and abilities, the search engine will find job offers that correspond to your personality and not necessarily to your professional field. For example, if you have held many positions with leadership responsibilities in the past, the search engine will understand that you like to be in charge and will suggest jobs that require this type of personal characteristic. This significantly broadens the scope of job seekers' searches and increases their chances of finding a job.
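
Under the hood, such a matching engine can be imagined as a scoring function. The sketch below is purely illustrative (the field names, weights, and sample data are invented, not Jobijoba's actual algorithm): it ranks offers by a weighted overlap between a candidate's skills and inferred personality traits.

```python
def match_score(candidate: dict, offer: dict, trait_weight: float = 0.4) -> float:
    """Weighted Jaccard overlap of skills and traits, between 0 and 1."""
    def overlap(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0
    skills = overlap(set(candidate["skills"]), set(offer["skills"]))
    traits = overlap(set(candidate["traits"]), set(offer["traits"]))
    return (1 - trait_weight) * skills + trait_weight * traits

candidate = {"skills": {"writing", "seo"}, "traits": {"leadership", "curiosity"}}
offers = [
    {"title": "Content manager", "skills": {"writing", "seo"}, "traits": {"curiosity"}},
    {"title": "Team lead", "skills": {"writing"}, "traits": {"leadership", "curiosity"}},
]
# Highest score first: offers the candidate might never have searched for
# can still surface if their traits match.
ranked = sorted(offers, key=lambda o: match_score(candidate, o), reverse=True)
```

Raising trait_weight makes the engine favor personality fit over professional field, which is exactly the behavior described above.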

What is useful for job seekers is also very useful for recruiters; artificial intelligence offers interesting opportunities for human resources management as well.

How is artificial intelligence used in human resources?

When it comes to recruiting new employees, big companies often have to face an overwhelming number of applications and the recruitment process can be very long and tedious. That’s when artificial intelligence can be a great help!

On websites such as Yatedo talent, the recruiter can type a specific profile into the engine's search bar. Algorithms then analyze all the open sources on the web, mainly online CVs and professional social networks such as LinkedIn. In just one click, the engine can propose a list of relevant profiles, sparing the recruiter the whole search and sorting process.

As mentioned before, the search engine can deduce a candidate's soft skills and personality traits, for instance by cross-checking CV information to detect passionate profiles. If a candidate has practiced web development as a hobby but has a degree in history, they will be considered passionate by the search engine because they are a versatile, self-taught individual. This can also help the recruiter find specific profiles well suited to a certain job.
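
The "passionate profile" heuristic described above can be sketched as a simple rule. This is a hypothetical illustration with invented field names, not the actual algorithm of any of the sites mentioned: a candidate is flagged when a hobby skill falls outside their degree field.

```python
def is_passionate(profile: dict) -> bool:
    """Flag candidates whose hobby skills lie outside their field of study."""
    hobby_fields = {skill["field"] for skill in profile["hobby_skills"]}
    return bool(hobby_fields - {profile["degree_field"]})

# The history graduate who taught themselves web development gets flagged.
candidate = {
    "degree_field": "history",
    "hobby_skills": [{"name": "web development", "field": "software"}],
}
```

A real engine would combine many such signals, but each one boils down to a cross-check between structured pieces of CV data.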

Does this mean that artificial intelligence will replace human resources managers? Thomas Allaire, creator of Jobijoba, says that this information technology was made precisely to help human resources managers save time by skipping the profile search so they can focus on the job interview. Since the technology cannot replace the sensitivity and intuition of a human recruiter, artificial intelligence actually helps provide a more humanized recruitment process.

If you are interested in artificial intelligence, be sure to read our other blog posts related to this subject:

Google Translate and Its Artificial Intelligence Can Work in Offline Mode

Artificial Intelligence in Translation: The Future Role of Human Translators

Will AI Lead to the Development of a New Form of (Universal) Language and, Therefore, to a New Conception of the World?

Since the dawn of sci-fi movies, we have seen characters accompanied by an AI companion, be it an android or the central computer of their spaceship, that acted as a friend during their adventures. Today, we have yet to see such technology in action, since AI development is not that advanced (and, according to certain people, it should stay that way). Nevertheless, we already have access to a degree of artificial intelligence, such as Siri or Alexa, which assists us with simple tasks.

The chatbots of our day

A "chatbot" is an artificial intelligence able to keep up a conversation with a person to some degree. Today, most of these chatbots serve the customer service or other B2C operations of corporations; they are very limited and mostly serve to lead users to the right department to solve their problems. These bots are not capable of holding a real conversation; they can only respond with predefined replies to expected questions. They are available on companies' websites, Facebook pages, or popular chat applications such as WhatsApp or WeChat.
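
The keyword-matching behavior of such bots can be sketched in a few lines. This is a generic illustration with invented rules and replies, not any particular company's bot: each keyword maps to a predefined answer, and anything unrecognized falls back to a clarification prompt.

```python
# Predefined replies, keyed by the keyword that triggers them.
RULES = {
    "refund": "I'll connect you with our billing department.",
    "delivery": "Let me check your order status with our shipping team.",
    "password": "You can reset your password from the account page.",
}
FALLBACK = "Sorry, I didn't understand. Could you rephrase your question?"

def reply(message: str) -> str:
    """Return the first predefined reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK
```

Asking about a "refund" routes you to billing; anything off-script gets the fallback, which is exactly why these bots cannot hold a real conversation.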

More “advanced” examples of these bots are Alexa, Google Assistant or Siri, which are capable of understanding and performing a variety of commands, such as a quick Google search on a subject, setting up an alarm, writing a reply to someone, etc.

How about befriending an emotional chatbot?

All these chatbots are only there to be our colleagues or assistants. At the end of the day, none of them asks us how our day was. Well, now there is one who cares about you and wants to befriend you: Replika, created by Eugenia Kuyda, became available in November 2018. The main idea is to have an AI friend who asks you questions every day to initiate conversation, with the aim of getting to know you better.

The idea is that Replika asks you questions and, as the conversations advance, it levels up by gathering information about you. This allows it to hold more detailed and personal conversations. You can also rate the replies it gives, since sometimes its answers demonstrate a real lack of empathy.

[Image: screenshots of Replika, posted on the Replika subreddit by users TheAIWantsUsDead and Here4DeepFakes]

Don't be mistaken: Replika isn't the sci-fi AI companion that can do anything for you AND be your friend. Replika can't really do anything besides talk. It's only there for a conversation, to ask you if you feel alright or if you've had a stressful day. So, don't expect it to be the perfect, self-conscious AI companion. It's not perfect and it makes mistakes.

Replika seems like an interesting experiment if you’re not creeped out by artificial intelligence. It might be a good idea to visit the Replika subreddit to see the friendly chatbot in action before deciding to give it a shot. At the end of the day, which one of us wouldn’t like to have someone to listen to our problems?

Have you ever interacted with a chatbot or an AI like Replika? What was your impression? Let us know in the comments below!

Thank you for reading, we hope you found this article insightful.

Want to learn more or apply to the TCLoc Master’s Program?

Click HERE to visit the homepage.

Thanks from the TCLoc web team.

Artificial intelligence has become a part of our everyday life. Whether we are on our phones, cooking, or buying our groceries, we are constantly surrounded by it. With new innovations every day, the place of artificial intelligence in our society is growing to the point where human mastery over it is becoming questionable. Books, movies, and video games have depicted futuristic societies where robots have taken over. Such scenarios lead us to wonder: in time, will artificial intelligence represent a danger to the human race?

Artificial intelligence, our faithful companion

“Alexa, play my morning playlist.” “Siri, set my alarm for 8 a.m.” For some, the first and last interaction of the day is with an artificial intelligence device. That is how ubiquitous the presence of artificial intelligence has become. Artificial intelligence (AI) is defined as a set of complex techniques and theories implemented to “replace” human action. And that is exactly what it does. Cleaning, cooking, shopping, all these actions can be done just by saying some words to your connected watch or your virtual assistant. That is how powerful AI is. 

Tell me what you like and I will tell you what you are thinking about

Over the years, artificial intelligence has become more and more powerful for one simple reason: it knows what you're thinking. It acquired this power through the analysis of big data. What is "big data"? It represents any form of information that you put online, whether on social media, e-commerce platforms, or internet search engines.

All this information is collected, shared, analyzed, and used by companies through their AI to know your habits, thoughts, and desires. Once they know what you need, new types of AI are created to respond to your desires. One universal, primal desire is to have a relationship or companionship. That is the specific purpose of some AI: to create an emotional bond with a human being. When emotional closeness between humans can be replaced by a relationship with an AI, where is the limit?

Faster, stronger, cheaper

Artificial intelligence devices are not only replacing humans in relationships, but also in the workplace. With each technological breakthrough, machines are able to do more and more jobs originally done by humans. As a consequence, a two-year study from McKinsey Global Institute foresees that by 2030, 30% of human labor could be done by robots. Robots are machines designed by us to do what we are able to do, but faster, while costing less money. They are slowly becoming a more profitable option than actual human beings.

While the industry sector is the most impacted by the growth of technology, the next sector that will suffer from job losses is the public sector. In the future, it is possible that nurses, receptionists, teachers, and even doctors could be slowly replaced (at least in part) by robots capable of doing the same job. 

Artificial intelligence is based on human intelligence. It is based on what we do, what we need, what we feel, and how we function. The only difference between us and AI devices is that we have basic needs that must be fulfilled, while they are driven by algorithms. They can do what we can, but without putting any effort into it. If we keep improving AI, what will prevent AI machines from becoming superior to their creators? Many scientists have predicted the rise of artificial intelligence and have warned the public. Even world-renowned physicist Stephen Hawking warned that we need to control the technology we create before it destroys us.


Neural machine translation, or NMT, is a fairly new paradigm. Before NMT systems came into use, machine translation went through several other types of systems. But as research in the field of artificial intelligence advances, it is only natural that we try to apply it to translation.

History of Neural Machine Translation

Deep learning applications first appeared in the 1990s, used not in translation but in speech recognition. At that time, machine translation was starting to regain momentum, after almost all research on the subject had been dropped in the 1960s because it was believed to cost too much for very mediocre results.

Rule-based machine translation was then the most used type of machine translation, and statistical machine translation was starting to gain importance.

The first scientific paper on using neural networks in machine translation appeared in 2014. After that, the field started to see a lot of advances.

In 2015, the OpenMT machine translation competition counted a neural machine translation system among its contenders for the first time. The following year, NMT systems already made up 90% of its winners.

In 2016, several free neural MT systems were launched, such as DeepL Translator and the Google Neural Machine Translation system (GNMT) for Google Translate, two of the most well known, which you can see compared here.

How Neural Machine Translation Works

These NMT systems are made up of artificial neurons, connected to one another and organized in layers. They are inspired by biological neural networks and are capable of learning on their own from the data they receive each time a new document is translated. The "learning" process consists in modifying the weights of the connections between the artificial neurons, and it is repeated with every new translation to constantly optimize the weights, and thus the quality of subsequent translations. NMT systems work with bilingual corpora of source and target documents that have been translated in the past.

The translation itself works in two phases.

First, there is an analysis phase. The words of the source document get encoded as a sequence of vectors that represent the meaning of the words. A context is generated for each word, based on the relation between the word and the context of the previous word. Then, using this new context, the correct translation for the word is selected among all the possible translations the word could have.

After that, there is the transfer phase. It is a decoding phase, where the sentence in the target language is generated.
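
The two phases can be illustrated with a toy encoder-decoder. This sketch uses random, untrained weights and an invented three-word vocabulary, so its output is arbitrary; it only shows the data flow of the analysis (encoding) and transfer (decoding) phases, not a real NMT system.

```python
import numpy as np

rng = np.random.default_rng(0)
SRC_VOCAB = ["le", "chat", "dort"]     # toy source (French) vocabulary
TGT_VOCAB = ["the", "cat", "sleeps"]   # toy target (English) vocabulary
DIM = 8                                # size of the meaning vectors

E_src = rng.normal(size=(len(SRC_VOCAB), DIM))   # source word embeddings
W_ctx = rng.normal(size=(DIM, DIM))              # weights mixing in the previous context
W_out = rng.normal(size=(DIM, len(TGT_VOCAB)))   # decoder output weights

def encode(sentence: list[str]) -> list[np.ndarray]:
    """Analysis phase: each word's vector is combined with the previous context."""
    context = np.zeros(DIM)
    contexts = []
    for word in sentence:
        vec = E_src[SRC_VOCAB.index(word)]
        context = np.tanh(vec + W_ctx @ context)  # depends on the context so far
        contexts.append(context)
    return contexts

def decode(contexts: list[np.ndarray]) -> list[str]:
    """Transfer phase: pick the most likely target word for each context."""
    return [TGT_VOCAB[int(np.argmax(ctx @ W_out))] for ctx in contexts]

# With untrained weights the chosen words are arbitrary; training on a
# bilingual corpus is what turns this data flow into actual translation.
translation = decode(encode(["le", "chat", "dort"]))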

Deep Learning: Better, but Still not Perfect

Even though deep learning systems are the best machine translation systems to exist yet, they are not perfect and cannot work entirely on their own. As languages are used every day, they are constantly evolving. Therefore, deep learning systems always need to keep learning, especially neologisms and new expressions. And to learn these new elements, they will always need the help of humans, whether it be to work on the systems directly or to perform post-editing on translated documents.

Nonetheless, systems that can "learn" on their own represent a massive improvement, not only for machine translation but also for other natural language processing tasks, as well as for artificial intelligence in general.

Neural machine translation still needs research and improvement, for sure. But it does represent a bright future for machine translation. Of all the people reading this article, most will have used a neural machine translation system before, whether knowingly or not. And if you haven't yet, there is a good chance that you will at least try one now, for example DeepL Translator.
