In this post, Emmanuelle Lacore-Martin, a Lecturer in French in the School of Literatures, Languages and Cultures, discusses the impact of Google Translate on students’ language learning…
Translation has long been an integral part of language teaching in the Department of French and Francophone Studies, both from French into English and from English into French. The latter in particular has been a pillar of grammar-based language teaching and has outlived recurring debates about the possible conflict between rule-based teaching and the learning of communication in a second language.
Its endurance comes down to a straightforward belief: that teaching language students the rules of morphology and syntax is freeing rather than stifling. Direct exposure to language in real-life contexts of communication is an essential part of second language acquisition, but it will not promote effective learning without secure knowledge of essential linguistic rules.
Translation is one of the most creative and empowering ways for students to understand, learn and apply grammar in context. Translation promotes a better understanding of the first language, because students learn to break sentences down into meaningful units and to recreate links and meaning in the target language. Translation is guided creative writing, but creative nonetheless, and, as a result, students love it.
The guided yet creative nature of the exercise also makes translation a particularly rewarding form of assessment: one text allows for a near-infinite number of renderings, encouraging originality within the rules on which students are being tested. Take-home translation exercises have always been an essential part of learning and exam preparation within that model, which also informed course design and assessment.
And then Google Translate “happened”.
Almost overnight, the machine translation service went from laughably unreliable to remarkably efficient, even artful, in the most frequently used languages. Professional translators were amazed to discover that the automaton’s renderings now rivalled some of their own. Machine translation was becoming astonishingly reliable, and it was freely and easily accessible to all. For us, this represented a challenge to fair assessment that deeply affected our teaching and learning environment.
If the transformation was rapid, it took us some time to measure its impact. Those who had little use for machine translation generally took longer to realise what had happened, and teachers sometimes caught up with it later than students. But once they did, the questions came fast: was it still worth teaching translation, if the right algorithm could one day supersede the skills of professional translators? Could students still be motivated to learn to translate? And how could we still assess students on take-home translation exercises, when anyone could produce a first-class translation at the touch of a button?
A conversation had to start between teachers and students. For some time, machine translation had remained the proverbial elephant in the room as teachers marked ever more impressive take-home assignments while half-heartedly trying to convince themselves of the increasing efficiency of their teaching skills. As it quickly became obvious that simply prohibiting the use of online translation tools was never going to be practical or effective in itself, we began to rethink long-trusted modes of teaching and assessing.
In the end, translation had to stay. It did not make sense to wilfully renounce such an essential and well-liked exercise. Education is key: acknowledging these tools can help us educate students about the possibilities they afford as well as the dangers of relying on them at all stages of language acquisition. Research into the pedagogical value of machine translation is still in its infancy, but it already supports what common sense readily perceives: machines that translate do not teach us how to translate, and the result they provide is of little to no value to a learner who remains blind to the process.
In a powerful way, the transformation of machine translation is leading us to review our teaching and assessment formats. While we warn students against machine translation and prohibit its use in assessed take-home exercises, we are also rethinking translation exercises to reflect a necessary shift from result-based to process-based assignments. Translation exercises must now involve a reflective element: questions on translation choices, designed to help students identify and negotiate more intricate structures while also reflecting on their command of linguistic rules. Reflective answers are weighted as heavily as the translated text itself, as we reaffirm the absolute necessity of advanced linguistic expertise.
These developments give us an opportunity to enthuse students about their own skills and remind them of the creative nature of language acquisition. For all their recently acquired efficiency, automatic translators still fare poorly on non-literal uses of language: plays on words, breaks in register, or aspects of cultural context often remain beyond their reach. This gives us an incentive to develop more creative ways of “Google-translate-proofing” translation assignments: from translating humour to exercises involving stylistic variation or homophonic translation (a playful exercise based not on the meaning of words but on their phonetic similarity), the possibilities for inspiring students are endless, reminding both them and ourselves of the agility of the human mind in the use of languages.