|Renato||I am Renato Beninatto.|
|Michael||And I am Michael Stevens.
Renato, I know one of your pet peeves is the automatic response that translators and LSPs have when the topic of Machine Translation is brought up. You get really angry when these people say that they don’t use the technology because of confidentiality.
|Renato||Don’t get me started, Michael. What really irritates me is that this is like an automatic reaction. Nobody thinks about the real implications. It is just one of those things that people repeat over and over and over again, and when you challenge them, they don’t know why they do it. They don’t know what is behind this resistance.
So before we get deeper into the discussion of the implications of Neural MT, I thought it would be good to talk to someone who has done his homework and looked into the issue from the perspective of the professional translator.
|Jost||So, my name is Jost Zetzsche. I’m an English to German translator. I’m interested in translation technology and, for that purpose, I write a newsletter that comes out about once a month called the Translator’s Tool Box. And I wrote a book about translation technology as well, an eBook that is available online. Myself, I am very untechnical. I’m just really interested in finding ways to make my translation work more productive.
I think confidentiality in itself is something that you need to look at, and you need to take seriously, because your clients take confidentiality seriously. You have contracts with your clients that forbid you to do certain things or that make you want to follow certain rules as far as confidentiality goes. But I think what’s happening, especially with generic online machine translation programs such as Google Translate or Microsoft Bing Translator, is that most of us have actually never looked into the details of what confidentiality means when the data is being transferred to Google.
And I’m talking about Google specifically now. In Google’s case, what is happening is that if you go to their website, punch in a sentence there and get a translation, then that data might actually be used by Google. Not in a translation memory kind of way, but it might be used for any kind of purpose. That’s what Google is saying in their contract with you. But if you use Google Translate through an API, that means if you use Trados or Déjà Vu or Across or whatever kind of tool you’re using and you connect to the machine translation engine through your translation environment tool, then in the case of Google, Google is specifically saying “we are not going to use that data”. It makes that very clear.
And so that is something, I think, that is not particularly well-known, and I used the term scaremongering in my last newsletter when I talked about that. I think the fear that translators have of using Google Translate, in particular, is sort of being used by other people in the market to make them believe that it is indeed not confidential if you use Google Translate in any kind of way, and so you shouldn’t use it. And that’s just not quite correct.
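Jost’s distinction above, between pasting text into the public website and connecting through the API, can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the endpoint and parameters follow Google’s Cloud Translation v2 REST API, the `API_KEY` value is a placeholder you would replace with your own key, and the helper names `build_request` and `translate` are my own.

```python
# Sketch: calling Google Translate through its REST API (the usage mode
# that, per Google's API terms, does not feed your text back into other
# Google services) instead of pasting text into the public website.
# Endpoint and parameters follow the Cloud Translation v2 API.
import json
import urllib.parse
import urllib.request

API_URL = "https://translation.googleapis.com/language/translate/v2"
API_KEY = "YOUR_API_KEY"  # placeholder; supply your own key


def build_request(text, target_lang, source_lang="en"):
    """Build the POST request for translating a single segment."""
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "q": text,
        "source": source_lang,
        "target": target_lang,
        "format": "text",
    }).encode("utf-8")
    return urllib.request.Request(API_URL, data=params, method="POST")


def translate(text, target_lang):
    """Send the request and unpack the translated text from the JSON reply."""
    req = build_request(text, target_lang)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"]["translations"][0]["translatedText"]


if __name__ == "__main__":
    # Actually calling translate() requires a valid key and network access:
    # print(translate("Confidentiality matters.", "de"))
    print(build_request("Hello", "de").full_url)
```

This is the same kind of connection that Trados, Déjà Vu, or Across makes under the hood when you plug a machine translation engine into your translation environment tool.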
|Michael||And Chris Wendt from Microsoft, who was with us in the last episode, he also had something to say on this topic.|
|Chris||So those are the people who send their translation jobs to their supplier via email!
Even if somebody like Google, Microsoft or one of the bigger providers wanted to, how would we find any relevant information in this? So, number one, we don’t really care about what people translate; we care about providing a good job, maybe a better job than our competitors would do. So, I mean, I’m talking about Microsoft here; by default we use a small portion of the translation traffic for quality improvement. We take a random set of sentences from all of the traffic that we translate, which is many billions of words every day, we take a small sliver of it and use it for quality improvement; basically, we add it to our test sets so that we can hill-climb on what people actually translate.
So, that’s our interest. And that’s our only interest. From all those billions of words, we can’t track it to the individual originator. Generally, we don’t know who the customer is. If you, say, subscribe to the Translator API, you enter your name and your credit card somewhere on the billing and provisioning end, which is completely separate from the provider end, and that is intentionally designed that way because that’s PII, which is secured even within the company. So, even if I wanted to find out who that originator is, I couldn’t.
|Renato||You don’t have a bunch of people in India and China just going through all the translation segments that go through Microsoft Translator, trying to identify company secrets and medical advances and new patents and things like that, that they can extract, reuse and sell on the black market?|
|Chris||Clearly we don’t! We, just like any company of this size, have a strict regimen on what data can be shared with whom, even internally within the company. So, we wouldn’t be allowed to share any data, even if we knew about it, say, data from a gaming company, with the Xbox Division. We couldn’t. Number one, we wouldn’t know that it was a gaming company which had submitted that particular translation, and our internal data protection prevents us from sharing data with really anybody, except for the stated purposes that we provide in our privacy statement, which is to improve Microsoft Translator.
Plus, we also offer, and I don’t really know about the competitors here, what we call a no-trace option: no log is created of anything. So, there is no permanent record of anything you translated. That is limited to people who have a paid subscription.
|Renato||Very good. And it’s a common trade-off that makes sense. If you want it for free, you contribute to the improvement; if you want to pay, you have your privacy.|
|Chris||But, even if you don’t opt into that, the chance that your material ends up anywhere else is really close to zero, and there is no known case where that happened – as far as I know – with either provider. I certainly know of none for us, and I haven’t heard of Google doing…|
|Renato||When you talk to the detractors of machine translation, they will often come up with these issues: “what about the problem that your company can be sued for breach of confidentiality?” And I say “give me one case that you know of where a company was sued for breach of confidentiality,” and I haven’t found any in 30 years in this business!|
|Chris||So, at Microsoft we had a couple of cases where, say, a screenshot of a confidential, non-released product leaked due to a human translator, so we have evidence of that happening. Though the other question that you can ask is: where do you have your data now? Is it on some server in your basement? Is it at your language service provider? What kind of security did you apply against, say, hacking on your account?|
|Michael||So, now that we have taken privacy and confidentiality out of the way, translators can feel confident using machine translation as a productivity tool. What is next?|
|Renato||Well, Michael, I think that the big question is how neural MT will be used. Today, it is mostly an academic exercise with some deployments by Google, Microsoft, Baidu, and most recently Naver, in Korea. But you had an insightful conversation with someone who looks at machine translation from all angles.|
|Olga||I’m Olga Beregovaya. I manage technology solutions at Welocalize, one of the largest LSPs in the world, and I’m also President of the Association for Machine Translation in the Americas.|
|Michael||Awesome. What does the president of the Association for Machine Translation in the Americas do?|
|Olga||Well, it’s mostly about organizing events and keeping the visibility of machine translation in the Americas at a high level, making sure that there is awareness of the field. It’s reaching out to academia, it’s maintaining our website, it’s making sure that publications in the field are accessible from our website, and it’s most definitely working on organizing the conferences in the field.|
|Michael||Awesome. So, you are connecting the commercial side to the academic side, making sure there are more conversations happening.|
|Olga||Yes, it’s essential that the commercial side and the academic side are talking to each other. There is a lot of amazing research coming out of academia, but I think it’s also important, and I’ve spoken about it at a couple of events, that academia gets input from the commercial users and understands what’s important for them at the moment: which areas are of utmost interest and which areas matter most. Academia would be very capable of helping the commercial users and helping drive commercial adoption if, I believe, they get the proper input from us commercial users and commercial developers.
So, I think it’s very synergistic, and I think it’s very important that the dialogue is ongoing. The same applies to government users; again, government users obviously need to be in touch with both academia and commercial users because the three can learn a lot from each other.
|Michael||At some point in your career, were you the CEO of, or did you own, your own machine translation company?|
|Olga||Yes, I was actually CEO of the US commercial division of PROMT, or ProMT, however you want to pronounce it, a machine translation company run out of Russia. I was driving development of their commercial enterprise server product and driving the adoption of this product in the Americas. Yes, that was a part of my career dedicated to that.|
|Michael||So, you’ve got a number of years of experience, and varied experience across different groups. These recent announcements related to neural machine translation, how do they rank in significance, in your opinion?|
|Olga||Well, based on the results that we’re witnessing, it is a significant breakthrough. It’s definitely, I think, very similar to what happened when phrase-based statistical machine translation was introduced and the breakthrough that it signified compared to what was available at the time, the results from rule-based machine translation.
So, in order of significance, I think it’s either similar or maybe even more significant, because the results that we’re seeing are pretty amazing in terms of fluency and in terms of how natural machine translation sounds. There was a breakthrough when statistical machine translation was introduced in the early 2000s, and I think now we’re witnessing an equally significant or maybe even more significant breakthrough.
|Olga||Yes, it is pretty impressive, and I’ve been following comparative studies of neural machine translation and the way it stacks up against current, state-of-the-art statistical machine translation systems. While I would not support the claims that, with the introduction of neural MT, machine translation is now on a par with human translation, I still think that’s an overstatement and that there is still a way to go, we definitely are witnessing significant improvements in the way machine translation is performing.|
|Renato||As you can see, progress exists and it is fast. What we need to talk about now is what this will mean for the human translators. Even if we wanted to, humans wouldn’t be able to translate all that there is to translate, so machine translation, neural or not, is a reality that is here to stay.|
|Michael||And I like what Mike Dillinger said. He was in our first episode and works with MT at LinkedIn. When we were talking about this, he reminded us of one of his presentations at a conference.|
|Mike||So in this talk, when I talked about old assumptions and new assumptions, what I was trying to say was that the old assumption is that the MT researcher’s main goal is to produce an autonomous translation machine that can do something, or maybe everything, really well without humans. And under what I called the new assumption, we could be working more like the people who develop systems for pilots, you know, fly-by-wire systems, where the main goal is not to create drones; the main goal is to allow pilots to fly ever more complex airplanes more safely.|
|Renato||Well, this seems to fit very well with the idea that language is dynamic and constantly changing, and it defeats one of the main resistance arguments that linguists have, which is that machine translation cannot replace the human. So, you believe that MT, even neural MT, is more of a support tool than an end in itself.|
|Mike||That’s how I think we should be treating it. The example I like to give is the automotive industry. It took us more than 100 years to start working on self-driving cars. Up until then, we had these machines that humans had to drive. I think we should approach MT the same way.|
|Michael||So, this would be more of an advance like going from a manual transmission to an automatic transmission.|
|Mike||The neural MT stuff? Yeah, that remains to be seen. So far, neural MT has given us, essentially, what I see to be incremental improvements. It’s a shiny new technology that holds promise for other kinds of improvements, but so far we’ve seen incremental kinds of things.|
|Renato||Do you think that neural MT is something that’s here to stay, that it’s already proven that that’s the way to go? Or is this going to be a new fad like crowdsourcing was 10 years ago, where everybody talked about it, everybody wanted it, and then all of a sudden it’s not a viable model, for translation at least?|
|Mike||No, neural MT is here to stay. It’s a really significant technical advance. One of the things that’s interesting about it is that it takes much more of the context into consideration than phrase-based MT, and that’s, essentially, where I think most of the improvements we see come from. So, for example, in neural MT we see much better agreement, noun-verb agreement for example, even when they’re split up in different parts of the sentence. I think that’s mostly because neural MT, to compute the translation, takes into consideration the whole context of the sentence and sometimes the context of the paragraph or document.|
|Michael||This ties in well with what Olga was telling us about the commercial applications of neural MT and what it means for post-editing.|
|Olga||For instance, you can see that neural machine translation output is still weaker on terminology, and it definitely needs work when it comes to handling unknown words. It’s not doing that great. So, when we think about commercial application of neural machine translation, and when I say commercial, I mean adoption by large enterprises that are currently machine translation users, I would still be very cautious, because their post-editors might have challenges with post-editing for adequacy and they might be easily misled by the fluency of the output.|
|Michael||When the editors are challenged, does that mean less efficiency? What does that mean in the actual work that they’re doing?|
|Olga||I wouldn’t say it’s less efficiency. Probably the fluency of neural MT output is going to help editors edit faster, but I’d be cautious about the accuracy of their final output. When you are looking at something that’s very fluent, that sounds, well, not near human translation, but very natural and fluent, you might not be paying attention to details. So, I would imagine that post-editors might miss certain accuracy details. Because the translation is so readable, they might miss a term or a mistranslation because they could be misled by how fluent the output is.
So, I don’t think we’re talking about less productivity, but I think post-editing neural machine translation will require a slightly different skillset than post-editing statistical machine translation, just like post-editing statistical machine translation is different from post-editing rule-based machine translation, because you are looking for different things and you are dealing with different patterns.
|Michael||Yes, that is a big distinction and a good one to highlight, because I remember that years and years ago people would talk about translators doing post-editing. It was a big step forward when you started saying “no, we’re having editors do that work.” And now it looks like another step forward in what the editors are actually working on. That’s helpful. Do you see any changes in the buyer and supplier relationship? You mentioned caution, but do you see how neural MT may impact that?|
|Olga||The buyer and supplier relationship. Well, I would imagine, again, that if the unknown-terminology issue is resolved and the adequacy issue is resolved, the quality is going to get higher. And we definitely see that the quality is getting higher. So, I think what’s going to happen is there will be a certain expectation on the buyer side that pricing will be revised, and they will probably expect innovative pricing models around neural machine translation, just based on the fact that the quality of the output is improving.
So, understanding that the post-editor is dealing with significantly better output, I would imagine that the buyer would want to see higher discounts or, as I said, different pricing models that take into consideration the quality of machine translation.
|Michael||Do you see, perhaps, new use cases for buyers?|
|Olga||I think that buyers are being cautious around publishing raw machine translation output because there is a certain mistrust of its quality. What I think is going to happen is a lot more adoption of raw machine translation, for cross-country selling, for publishing knowledge bases, basically for any content that could be published as raw machine translation now but is not getting published because of that mistrust.
The post-editing business case is not going away. Post-editing is going to be there; the edit distance, as I said, is probably going to shrink because the quality of the machine translation output is going to be higher, but, again, I don’t see neural machine translation hitting the levels of human translation. So, machine translation with post-editing is still going to be a line of business, and it’s going to stay that way for a while. But there will be other use cases added because of the quality.
|Renato||Michael, we have heard from a lot of people for this podcast series, what are our main takeaways?|
|Michael||Well, for me, Renato, there are two things that stood out from the conversations. First, statistical MT and phrase-based MT systems have pretty much peaked. Adding more parallel content and rules to these systems will yield very little gain over their current performance.|
|Renato||And what was the second?|
|Michael||Well, the second was the fact that even though neural networks have made significant advances in image and voice recognition, there is still some way to go before they master translation, which really is the most difficult challenge for Artificial Intelligence.|
|Renato||Computers have managed to beat humans at interpreting x-rays, playing chess, the Chinese game of Go, and more recently poker, but they have yet to beat professional translators. That’s encouraging.|
|Michael||It is. It is. And what about you, Renato, what did you learn this time?|
|Renato||Well, I am always learning, but I like the concept of the human and the machine working together and the fact that the translators and the developers can spend more time on activities that create value instead of performing repetitive tasks. I have always been a proponent of automating what is boring.|
|Michael||So with that in mind, how do we come to an end or conclusion for this series?|
|Renato||Let’s end it on a hopeful note, for the time being at least. We are recording this podcast in February 2017. This month, in a competition organized by a university, a group of four professional translators competed against three neural MT programs provided by Google, South Korea’s top Internet company Naver, and Systran International.|
|Michael||And what happened in this competition?|
|Renato||Well, both sides were tasked with translating random English articles, literary and non-literary, into Korean, and Korean articles into English. A total of 50 minutes was given to translate the texts, and the translated works were then evaluated by two professional translators.|
|Michael||Of course it took seconds for the MT systems to translate it, so they definitely beat the humans on delivery time.|
|Renato||Of course, yes, but in blind evaluations the organizers said that the four professional translators scored an average of 25 out of 30 in translating Korean into English, while the MT software scored between 10 and 15.|
|Michael||So today, as the technology stands, humans are still much better than the machine.|
|Renato||Yes. The judges said the machine was unable to understand context and generated sentences that were grammatically awkward.|
|Michael||So that is the status of Neural Machine Translation in February 2017. Maybe we need to revisit this topic in three or four years—or who knows, even less.|
|Renato||Well, at the speed that things are going, who knows?|
End of conversation
Jost Zetzsche is an independent translator, localization consultant and writer. He is the author of “A Translator’s Tool Box for the 21st Century” and “Translation Matters” (www.internationalwriters.com), and co-author of “Found in Translation: How Language Shapes Our Lives and Transforms the World”.
Olga Beregovaya is Vice President, Technology Solutions at Welocalize, and President of the Association for Machine Translation in the Americas.
Mike Dillinger is a former President of the Association for Machine Translation in the Americas, and Manager of the Taxonomy Team and Machine Translation at LinkedIn.
Chris Wendt is responsible for the planning and design of Microsoft’s machine translation services: Microsoft Translator, Bing Translator, Skype Translator and translation features in Office, Internet Explorer and Bing, as well as the subscription service available to the public. He guided the incubation of the original internal research project in the NLP group to one of the two most widely used automatic translation services on the web.