ABSTRACT

Machine translation (MT), the automatic translation of text from one human language into another, was one of the first non-numeric applications of modern digital computers. Its early days in the aftermath of the Second World War were marked by naïve optimism and ultimate disappointment, but by the first decade of the new millennium MT had become ubiquitous and was being used on a massive scale. In this chapter Dorothy Kenny traces the historical development of MT, pointing up the technological and geopolitical factors that reinvigorated MT research on more than one occasion. Of particular significance was the shift from rule-based systems to data-driven systems, in which machines ‘learned’ probabilistic models of translation from an ever-growing supply of human translations available in digital form. Kenny goes on to describe the main methods used in MT, showing how the conceptualisation of meaning and translation shifts with each one: from the symbolism of rule-based systems; to the statistical approach that sees translation as a form of Bayesian optimisation and is largely indifferent to meaning; to the connectionism of neural MT, in which meaning is seen as relational and distributed, and represented by the links between nodes in an artificial neural network. The chapter also considers some of the more troublesome aspects of contemporary MT, including its opacity and its co-dependent yet sometimes antagonistic relationship with human translation. Despite occasional tensions, Kenny argues that MT provides the ideal locus for translation studies to engage with some of the most pressing questions of our time, questions linked to the resurgence of interest in artificial intelligence, and to the future of human labour.