
AI Won’t Replace Translators. But It Can Help Them.



Opinion

It’s 1960 all over again

A hand holding up a bar, with a robot and a human at both extremities, perfectly balanced.
Image from Pixabay

In a recent study, the University of Pennsylvania and OpenAI investigated the potential impact of large language models (LLMs), such as GPT models, on various jobs.

GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models (Eloundou et al., 2023)

Their main finding is that 19% of the US workforce may see at least 50% of their tasks impacted.

Some jobs are much more likely to be impacted than others.

Translator and interpreter jobs are among the most exposed.

But “exposed” shouldn’t be interpreted as “threatened”.

I saw this study being misinterpreted on social media. The authors never wrote in their study that AI or LLMs would replace, endanger, or eliminate any jobs.

Machine translation has seen many breakthroughs in its 70 years of existence. The prospect of machines replacing human translators has been predicted and debated from the inception of computer science to the rise of the Internet.

Translator jobs are very safe for many more decades. The replacement of human translators with AI won’t happen.

“the resulting literary style [of the automation of translation] would be atrocious and fuller of ‘howlers’ and false values than the worst that any human translator produces”

“translation is an art; something which at every step involves personal choice between uncodifiable alternatives; not merely direct substitutions of equated sets of symbols but choices of values dependent for their soundness on the whole antecedent education and personality of the translator.”

J.E. Holmström

This was written by J.E. Holmström in a 1949 report on scientific and technical dictionaries for UNESCO. He was very skeptical about the possibility of fully automated translation.

Holmström’s comment was made several years before the very first prototype of a machine translation system was demonstrated by IBM and Georgetown University in 1954.

The results were impressive for a time when computer science was still in its infancy.

Both the public and the sponsors of machine translation research believed that fully automatic translation was within reach in just a few years.

The growing excitement for machine translation was reinforced by the arrival of more advanced computers and more accessible programming languages.

Some would compare that context to today’s, with GPUs and AI becoming ever more powerful and accessible. But I think this is actually nothing compared to how revolutionary the first computers were.

Nonetheless, for the very first time, translators started to worry that technology could take their jobs.

It took almost a decade to realize that machine translation wouldn’t be as good as hoped anytime soon.

Funding for machine translation research stopped flowing in 1966, when the US-sponsored Automatic Language Processing Advisory Committee (ALPAC) concluded that machine translation had failed to meet its ambitions.

Note: I think we won’t have an ALPAC moment ever again in machine translation research. Most of the breakthroughs are now made by private companies, and not by public organizations.

Following this event, research in machine translation significantly slowed down.

The systems at that time were all rule-based and extremely complex to set up. Their cost and translation quality were no match for human translators.

After ALPAC, it took several more decades for machine translation to make significant progress, until the rise of statistical methods in the early 1990s.

Again, many believed that statistical machine translation would improve quickly, but progress remained very slow until about a decade ago, when deep learning finally became accessible.

The great wave (illustration)
Image from Pixabay

I categorized breakthroughs in machine translation into four waves:

  • 1950s–1980s: Rule-based
  • 1990s–2010s: Statistical
  • 2010s–2020s(?): Neural sequence-to-sequence
  • 2020s–?: AI with large language models

At the beginning of every wave, the excitement about machine translation improvements was outstanding. But it always faded away within a few years.

Note: I only witnessed the transition from statistical to neural. But I can tell you that when Ilya Sutskever and his co-authors published “Sequence to Sequence Learning with Neural Networks” in 2014, it was a huge event in machine translation research. It is still one of the most cited papers in the field.
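For readers who want to see what this neural sequence-to-sequence wave looks like in practice, below is a minimal sketch of running a pretrained encoder-decoder translation model. It assumes the Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-fr checkpoint, neither of which is mentioned in the paper or in this article; treat it as one convenient illustration, not as the setup used in the research discussed here.

# Minimal sketch: translating with a pretrained neural sequence-to-sequence model.
# Assumes the Hugging Face `transformers` package and the public
# Helsinki-NLP/opus-mt-en-fr checkpoint (illustrative choices, not from the article).
from transformers import pipeline

# Load an encoder-decoder translation model: the encoder reads the English
# source sentence, the decoder generates the French translation token by token.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("Machine translation will not replace human translators.")
print(result[0]["translation_text"])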

It is still too early to write that the sequence-to-sequence days of machine translation are over.

According to recent studies, the most powerful language models are as good as, or slightly worse than, standard machine translation systems.

For now, the main advantage of large language models is a significant reduction in machine translation costs. Intento reported that ChatGPT currently costs 10 times less than the best online machine translation systems, for similar translation quality.
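To make the comparison concrete, here is a minimal sketch of using a large language model as a translator through a simple prompt. It assumes the openai Python package (v1 interface) and the gpt-3.5-turbo model; the article does not say how ChatGPT was queried in Intento’s evaluation, so this is only one plausible setup, not their methodology.

# Minimal sketch: prompting an LLM to translate, instead of calling a dedicated
# machine translation system. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the article only says "ChatGPT"
    messages=[
        {"role": "system", "content": "You are a professional English-to-French translator."},
        {"role": "user", "content": "Translate into French: AI won't replace translators, but it can help them."},
    ],
)
print(response.choices[0].message.content)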

While language models are promising, they are still very prone to hallucinations and biases. They are also very data-hungry, and thus difficult to train for languages for which data are not available in large quantities.

It will probably take many more years to overcome these lingering issues.

[Holmström’s comments about translation being an art] have been repeated again and again by translators for nearly fifty years, and no doubt they shall be heard again in the next fifty.

John Hutchins (Translation Technology and the Translator, 1997)

Translators still have to repeat this today.

John Hutchins was a visionary.

Whatever technology is used for machine translation, it will never have the education and personality of a translator.

Technology is an ally.

Since deep learning made its way into machine translation, it has been widely acknowledged that machine translation quality has improved considerably.

Did it lead to translators losing their jobs?

No.

If we look at some data, we can even see that over the last decade, more translator jobs were created in the UK.

In the US, the number of translators remained stable.

My own prediction is that AI systems relying on large language models will simply be assimilated into existing machine translation workflows. They will significantly speed up translation tasks while reducing costs for translation companies.

Translation of professional quality may even become more accessible than ever before.

If you like this article and are interested in reading the next ones, the best way to support my work is to become a Medium member using this link:

If you are already a member and want to support this work, just follow me on Medium.

