The Challenge of Neural MT: Part II

Tune in for the second installment in our three-part series on Neural Machine Translation. Hear what some of the language industry's leading experts on Neural MT think about the promises, limitations, and pitfalls of this revolutionary new technology. This podcast uses the "Flow and Disorder" sound by copyc4t from freesound.

Transcript

Renato I am Renato Beninatto.
Michael And I am Michael Stevens. In Part 1 of this series, we looked at the general concept of Neural Machine Translation. We discussed the difference between rules-based machine translation and phrase-based statistical machine translation, and how neural MT takes this approach to another level.
Alon My name is Alon Lavie. I am a Senior Manager at Amazon, and I currently lead the Amazon machine translation research and development group, located primarily here in Pittsburgh, Pennsylvania.

I was a professor at Carnegie Mellon for 20 years, where my research focused primarily on machine translation and, more broadly, natural language processing. Then I had a technology company, Safaba, that I spun out of CMU in 2009 to focus on machine translation, particularly enterprise machine translation. That company was acquired by Amazon in the summer of 2015, and now we are part of Amazon, where I'm leading machine translation research and development.
Renato Based on his experience in MT, both from an academic and an enterprise perspective, we asked Alon about the evolution of the different systems.
Alon So, very interesting things there. If you look historically at how machine translation technology and research have evolved, from the early days when rule-based translation ruled the game, to the data revolution and statistical models, simple and more complex, trained on data, and now to this neural approach, it has always been more evolutionary than revolutionary. It was never really the case that something swept in, was a complete solution to all of our problems or a huge solution to problems that were not solvable before, and then within six months replaced what was there. It was always much more evolutionary than revolutionary, and I think that is, in fact, true and will be true for the neural technology as well.

It's a significant advancement, but it will take time to actually make this work in a scalable way, particularly in enterprise settings. It will replace the currently dominant statistical machine translation technology, I'm guessing, over the next two to five years.
Renato So, Michael, we see that technology is improving at a very fast pace. It took decades for machine translation to move from manually inputting dictionaries and grammar rules into computers to looking into large corpora of parallel content to identify patterns and correlations based on the frequency of use of words and sentences.

In the last ten years, Google Translate revolutionized the concept of MT by making a generic version of it available to everyone for free on the web. Now Alon is telling us that neural MT will replace Statistical MT in two to five years.
Michael What took ten years now takes two to five. And, Renato, computer processors keep increasing their speed, capacity is increasing exponentially, and while all this is happening, prices are being slashed. So, we have the opportunity to deal with large volumes of content in an affordable way.

This is what makes neural MT possible today: affordable and fast computing capacity. But Alon Lavie expressed some reservations about its application in the enterprise environment.
Alon One of them is that, fundamentally, we don't yet have very good control over, or understanding of, what these systems are actually learning. The training process of statistical MT is already significantly complex and involves a significant pipeline of components and steps, particularly in enterprise settings, where we're not translating simple sentences or strings but structured content and documents, material that is often used in a machine translation plus post-editing workflow, with the technology deeply integrated into generating human-quality translations.

It's actually quite important to have significant control over the translation process, over what the machine translation systems are doing: being able to deeply integrate terminology, for example, so that translations are generated in a particular way, and being able to handle complex structured documents like web pages, Microsoft Office documents, anything coming today from content management systems.

So, it's a much more complex problem than just inputting a text sentence into a UI and getting a plain-text translation in another language. Those kinds of things are much more difficult to do with neural MT at this point. I don't think those are necessarily the biggest challenges in terms of the technology; those things are solvable, but it will take some time.
Renato We heard from several people that one of the biggest barriers for the application of neural MT in professional environments is that it makes mistakes.
Alon On the other hand, there are significant challenges when we're doing that, because we don't actually have any real control over what those internal representations being learned look like. Consequently, the technology the way it is, at least at this point in time, makes very, very strange types of translation mistakes. Because it's not a direct mapping between the source language and the target language in terms of words and sequences of words, it can make very, very significant mistakes in terms of the actual meaning.

So, we have much less control over the mistakes that these systems make. These systems are currently very exciting because of that approach, because it has the potential to solve many, many language translation problems in a very interesting way, and because they have been quite successful, at least from what we've seen so far, in generating translation output that is much more fluent in the target language. But when a system makes mistakes, particularly when we need accurate translation in very specific domains, it can make very, very bad mistakes, and those are very hard to detect and predict because of the way the architecture of these systems is currently designed.
Diane Hi, I'm Diane O'Reilly. I head up global sales and marketing for Iconic Translation Machines. I think one of the key points is that there's a lot of hype about machine translation at the moment. As you mentioned yourselves, lots of media outlets are publishing information about lots of different organizations that claim to be doing something with neural machine translation, but if we actually read between the lines, in a lot of instances there isn't anything of real substance.

But there are a few organizations that are actually doing some testing on it right now, and I think one of the key points is that, at the moment, it is testing. So, what we've been doing is a systematic comparison between our current system and a neural machine translation system.

So, in terms of the timeframe of when these systems are going to come online, or how they're going to come online, there are a lot of different factors to take into consideration. We've just been having a very technical discussion, so: is it going to make things better? Yes, it will. But it's not going to be a silver bullet; it's not going to sweep the board. We're still going to be using statistical MT for some languages, and maybe it's just components that will be integrated.

But I think some of the other key things people will have to think about in integrating neural machine translation into newer systems coming online are more practical. There are a lot of factors that need to be taken into consideration before organizations can just say, "Right, okay, here are lots of systems that are out there." At the moment it's probably a bit too early to comment on exactly how or when everything is going to hit the market, but certainly the key thing we're trying to do right now is to identify neural components that could be integrated into the current systems that we have.
Michael I asked Diane for an example of the practical application of neural MT.
Diane Recently, one of the enquiries we had was compliance monitoring for a financial institution, which is fantastic, because they wouldn't have considered using machine translation before. This is where we're starting to see new use cases emerge with the new technologies that are coming out.

So, neural is going to help with some of those languages that are a little bit more complicated for machine translation right now, and it's going to make MT far more accessible down the line. We're going to see a lot more new applications of it within enterprises, which is where I think it's going to be really exciting. I think it's actually going to explode, because there are going to be so many more use cases for it.
Michael It’s refreshing to see that there are some applications already on the radar for neural MT, but I wonder about the language pairs.
Renato I think we had a conversation with Marco Trombetti, from MateCAT about this topic.
Marco So, I'll tell you. Google, a few weeks ago, released neural MT for Chinese/English, and we could test it. We have also been testing other solutions, from Edinburgh, which have been very powerful and won the last two competitions. What we noticed is that in 70% of cases a human will say that the neural machine translation is better than, or equal to, the one before. So they're equal most of the time, but when one side wins, neural wins about twice as often as the old technology does.

So, there are some cases, maybe 30%, where the old technology is still winning. There are maybe 20% of cases where they are equal, and the remaining 50% is where neural is winning. So, in total, neural is better or equal 70% of the time. In terms of quality, from a human judgement, it is better.

And now, I think, there is interest because it's just a little bit better, but the real problem is efficiency. Now you convert human time, developer time, into quality. The engineer has a lot more time to think about how to improve the system, because he no longer has the maintenance of that one million lines of code; he has the maintenance of 250, and all that time to plan how to improve it. So the future looks better for this.
Renato What I'm hearing is that neural machine translation is great from the technology side but not necessarily from the language side. When we look at rules-based and statistical machine translation, we see that some languages are not good at all for machine translation. We know that Finnish, traditional Chinese, and Japanese are very hard to do. With neural machine translation, will there be significant improvements in these languages that are hard to do with statistical and rules-based?
Marco Yes, some of those languages will improve, simply because they were different and no one had worked out how to teach the machine to learn them. Since no one was working on them, neural will do the job for the human. So they will get better in some of those languages. But what is also very interesting is that the new way of working will allow us, in the future, to do new things.

So, we know that translation is one of the most complex things for a machine to learn. Language is a very compressed way of expressing meaning: we have something in mind, we convert it into words, and then you understand what I mean by having a lot of references outside the sentences I said. You have links to the knowledge and the experience you have.

So, we cannot think that in the future we will still teach a machine on written text alone. We should feed the machine with videos, images, the entire experience a human being has had, if we want it to perform at that quality. With neural it becomes easy to say, "Okay, here is YouTube; take everything you see on YouTube, classify that information, and connect it with this text." Then you can start to distinguish and disambiguate words and meanings by the fact that they represent two different images, for example, and the machine is able to learn those things without any effort from the developer. That creates the possibility of feeding the machine with data that was not possible before.
Michael One of the claims to fame of neural MT is that it creates its own interlingua and can translate content for languages without parallel corpora. So, theoretically, with very little input, neural MT could translate from Latvian into Swahili—your favorite language pair there, Renato.

I asked Mike Dillinger, from LinkedIn, if this feature would bring benefits for the long-tail languages.
Mike Definitely. So, if you think of projects like Translators without Borders, where they have to try to do some sort of machine translation, or aid human translation, with very long-tail languages, the current approaches, which require lots, and lots, and lots of data, just don't apply. So you have to do something that's kind of rule-based with some sort of translation memory and make do with that.

The other thing that we can't do at the moment is have the translators provide more general feedback. For example, at eBay, when we were translating things into Brazilian Portuguese, we found that the MT systems couldn't handle the diminutive suffix. There is plenty of data about "capa", a case for your iPhone, but many people described it as "capinha", a little case for your iPhone. In that case the MT systems just fell apart: "capinha" was considered an unknown word, the system didn't know what to do with it, and the translation quality really suffered.

It would be really great if a human translator could say, "Hey, translation system, '-inha' is a suffix, and if you can't find a word with the suffix, try looking for the word without the suffix." These general rules that leverage the human translator's expertise are the kinds of things that, as far as I know, we can't actually use in MT systems today.
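The fallback rule Mike describes, when a word is unknown, strip a recognized suffix and look up the base form instead, can be sketched roughly as follows. This is purely an illustrative toy, not how any real MT system is implemented; the lexicon, the suffix list, and the English rendering of the diminutive are all invented for the example.

```python
# Toy illustration of a suffix-fallback lookup rule (hypothetical data).
# Only the base form "capa" is in the lexicon; "capinha" is unknown.
LEXICON = {"capa": "case"}

def lookup(word):
    """Try the word as-is; on failure, strip the Portuguese diminutive
    suffix "-inha", restore the base ending, and retry."""
    if word in LEXICON:
        return LEXICON[word]
    if word.endswith("inha"):
        base = word[:-len("inha")] + "a"  # e.g. "capinha" -> "capa"
        if base in LEXICON:
            # Render the diminutive explicitly in the target language.
            return "little " + LEXICON[base]
    return None  # still unknown
```

With this rule in place, "capinha" no longer falls through as an unknown token; it resolves via its base form, which is exactly the kind of translator-supplied generalization Mike says current systems can't accept.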
Michael Renato, I recently heard that creating neural machine translation is very similar to raising kids: you give them a lot of know-how and data, but you have no idea what they'll do with it.
Renato That's a great metaphor! The fact is that neural machine translation has left the realm of fantasy and academia and has moved into the realm of reality.
Michael Yes, and so now we're able to look at its impact on the language industry.
Renato You know that I am an optimist. As I mentioned in our previous episodes, machine translation—either statistical or rules-based—already handles more words in a day than all human translators process in one year. For me, the real question is how we harness the power of machine translation to make the translator more productive.
Michael In the next episode of Globally Speaking, we are going to talk about humans and machines working together. We are also going to address practical concerns of buyers and suppliers, like confidentiality and pricing. You don’t want to miss this one.
Renato Globally Speaking Radio is produced by Burns360. You can subscribe to Globally Speaking on iTunes or wherever you get your podcasts.

You should also check out our archives at GloballySpeakingRadio.com, which has every past episode, including transcripts. You can follow us on Twitter and on Facebook, and feel free to share ideas for shows with us.

Thanks for listening.

End of conversation

Mike Dillinger

Mike Dillinger is a former President of the Association for Machine Translation in the Americas and Manager of the Taxonomy Team and Machine Translation at LinkedIn.

Marco Trombetti

Marco Trombetti is a tech entrepreneur in the language industry and co-founder of Pi Campus.

Alon Lavie

Alon Lavie is Senior Manager at Amazon and head of the Amazon Machine Translation Research and Development Group.

Diane O'Reilly

Diane O’Reilly is head of Global Sales and Marketing for Iconic Translation Machines.

