162 results for "Evaluation of machine translation"
Search Results
2. Multimodality and evaluation of machine translation: a proposal for investigating intersemiotic mismatches generated by the use of machine translation in multimodal documents
- Author
-
Thiago Blanch Pires
- Subjects
multimodality, machine translation, evaluation of machine translation, intersemiotic mismatches, Technology, Language and Literature - Abstract
ABSTRACT: This article aims to propose an interdisciplinary approach involving the areas of Multimodality and Evaluation of Machine Translation to explore new configurations of text-image semantic relations generated by machine translation results. The methodology consists of a brief contextualization of the research problem, followed by the presentation and study of concepts and possibilities of Multimodality and Evaluation of Machine Translation, with an emphasis on the notion of intersemiotic texture, proposed by Liu and O'Halloran (2009), and a study of machine translation error classification, proposed by Vilar et al. (2006). Finally, the article suggests some potentialities and limitations when combining the application of both areas of investigation. KEYWORDS: multimodality; machine translation; evaluation of machine translation; intersemiotic mismatches.
- Published
- 2018
- Full Text
- View/download PDF
3. Beyond MT metrics in specialised translation: Automated and manual evaluation of machine translation output for freelance translators and small LSPs in the context of EU documents
- Author
-
Krzysztof Łoboda
- Subjects
institutional translation, neural MT, machine translation, MT evaluation, specialised translation, evaluation of machine translation, natural language processing - Abstract
This paper discusses simplified methods of translation evaluation in two seemingly disparate areas: machine translation (MT) technology and translation for EU institutions. It provides a brief overview of methods for evaluating MT output and proposes simplified solutions for small LSPs and freelancers dealing with specialised translation of this kind. After discussing the context of the study and the process of machine translation, an analysis of fragments of the selected specialist text (an EU regulation) is carried out. The official English and Polish versions of this document provide the basis for a comparative evaluation of raw machine translation output obtained with selected commercially available (paid) neural machine translation (NMT) engines. Quantitative analysis, including the Damerau-Levenshtein edit distance parameters and the number of erroneous segments in the text, combined with a manual qualitative analysis of errors and terminology, can be a serviceable method for small LSPs and freelance translators to evaluate the usefulness of neural machine translation engines.
- Published
- 2021
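The Damerau-Levenshtein edit distance used in the study above can be sketched in a few lines of Python. This is a generic illustration of the restricted variant (adjacent transpositions only), not the authors' implementation:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions,
    and transpositions of adjacent characters (restricted variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

Applied per segment, the distance between raw MT output and the official reference gives the kind of quantitative edit-cost figure the paper works with.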
4. Methodology for the Evaluation of Machine Translation Quality
- Author
-
Ani Ananyan and Roza Avagyan
- Subjects
Computer science, Machine learning, Artificial intelligence, Evaluation of machine translation - Abstract
Along with the development and widespread dissemination of translation by artificial intelligence, it is becoming increasingly important to continuously evaluate and improve its quality and to use it as a tool for the modern translator. In our research, we compared five sentences translated from Armenian into Russian and English by Google Translate, Yandex Translate and two models of the translation system of the Armenian company Avromic, to find out how effective these translation systems are when working with Armenian. We also wanted to establish how effective they would be as a translation tool, and in the learning process, when their output is post-edited. As there is currently no comprehensive and successful method of human evaluation for machine translation, we developed our own evaluation method and criteria by studying the world's best-known evaluation methods for automatic translation. We also used post-editing distance as an evaluation criterion. Using one sentence from the article as an example, we present in detail the evaluation process according to the selected and developed criteria. Finally, we present the results of the research and draw the appropriate conclusions.
- Published
- 2021
5. DiaBLa: a corpus of bilingual spontaneous written dialogues for machine translation
- Author
-
Eric Bilinski, Sophie Rosset, Thomas Lavergne, Rachel Bawden, School of Informatics [Edimbourg], University of Edinburgh, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS), and University of Paris Sud (UPSUD)
- Subjects
Machine translation, Computer science, Corpus, Dialogue, Bilingual conversation, English, French, Test set, Context, Evaluation, Dataset, Computational linguistics, Natural language processing, Linguistics and Language - Abstract
We present a new English–French dataset for the evaluation of Machine Translation (MT) for informal, written bilingual dialogue. The test set contains 144 spontaneous dialogues (5700+ sentences) between native English and French speakers, mediated by one of two neural MT systems in a range of role-play settings. The dialogues are accompanied by fine-grained sentence-level judgments of MT quality, produced by the dialogue participants themselves, as well as by manually normalised versions and reference translations produced a posteriori. The motivation for the corpus is twofold: to provide (i) a unique resource for evaluating MT models, and (ii) a corpus for the analysis of MT-mediated communication. We provide an initial analysis of the corpus to confirm that the participants’ judgments reveal perceptible differences in MT quality between the two MT systems used.
- Published
- 2020
6. A Review and evaluation of Machine Translation methods for Lumasaaba
- Author
-
Peter Nabende
- Subjects
Computer science, Artificial intelligence, Evaluation of machine translation, Natural language processing - Abstract
Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are limited studies on Natural Language Processing applications for many indigenous East African languages. As a contribution to closing this knowledge gap, this paper focuses on evaluating the application of well-established machine translation methods for one heavily under-resourced indigenous East African language, Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. Then we apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and are usually associated with the source language input.
- Published
- 2020
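The BLEU scores reported in the review above rest on modified n-gram precision. A minimal sketch of that single ingredient follows (omitting BLEU's brevity penalty and the geometric mean over n-gram orders); it is a generic illustration, not the evaluation pipeline the paper used:

```python
from collections import Counter

def ngram_precision(candidate: list[str], reference: list[str], n: int) -> float:
    """Modified n-gram precision: each candidate n-gram count is clipped
    by its count in the reference (one ingredient of BLEU)."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    clipped = sum(min(count, ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())
```

The clipping step is what stops a degenerate hypothesis that repeats a common word from scoring highly against the reference.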
7. Evaluation of Machine Translation Quality through the Metrics of Error Rate and Accuracy
- Author
-
Michal Munk, Petr Hájek, Jan Skalka, and Daša Munková
- Subjects
Machine translation, Computer science, Word error rate, Similarity, Slovak, Sentence, Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
The aim of the paper is to find out whether it is necessary to use all automatic measures of error rate and accuracy when evaluating the quality of machine translation output from the synthetic Slovak language into the analytical English language. We used multiple comparisons for the analysis and visualized the results for each sentence through icon graphs. Based on the results, we can state that all examined metrics based on textual similarity, except the f-measure, need to be included in the MT quality evaluation when analyzing the machine translation output sentence by sentence.
- Published
- 2020
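Among the error-rate measures this kind of study examines, word error rate is representative: a word-level Levenshtein distance normalised by reference length. A generic sketch (not the authors' toolchain):

```python
def word_error_rate(hypothesis: str, reference: str) -> float:
    """Word-level Levenshtein distance between hypothesis and reference,
    divided by the reference length (lower is better)."""
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(hyp)][len(ref)] / max(len(ref), 1)
```

Note that WER can exceed 1.0 when the hypothesis is much longer than the reference, which is one reason papers like this one examine several metrics side by side.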
8. A Critique of Statistical Machine Translation
- Author
-
Andy Way
- Subjects
Machine translation, Example-based machine translation, Rule-based machine translation, Computer-assisted translation, Machine translation software usability, Dynamic and formal equivalence, Evaluation of machine translation, Linguistics, Language and Linguistics - Abstract
Phrase-Based Statistical Machine Translation (PB-SMT) is clearly the leading paradigm in the field today. Nevertheless—and this may come as some surprise to the PB-SMT community—most translators and, somewhat more surprisingly perhaps, many experienced MT protagonists find the basic model extremely difficult to understand. The main aim of this paper, therefore, is to discuss why this might be the case. Our basic thesis is that proponents of PB-SMT do not seek to address any community other than their own, for they do not feel any need to do so. We demonstrate that this was not always the case; on the contrary, when statistical models of translation were first presented, the language used to describe how such a model might work was very conciliatory and inclusive. Over the next five years, things changed considerably; once SMT achieved dominance, particularly over the rule-based paradigm, it had established a position where it did not need to bring along the rest of the MT community with it, and in our view this has largely pertained to this day. Having discussed these issues, we discuss three additional issues: the role of automatic MT evaluation metrics when describing PB-SMT systems; the recent syntactic embellishments of PB-SMT, noting especially that most of these contributions have come from researchers who have prior experience in fields other than statistical models of translation; and the relationship between PB-SMT and other models of translation, suggesting that there are many gains to be had if the SMT community were to open up more to the other MT paradigms.
- Published
- 2021
9. Metric for Evaluation of Machine Translation Quality on the bases of Edit Distances and Reverse Translation
- Author
-
V.S. Kornilov, Lozovoy A. Yu., and V.M. Glushan
- Subjects
Computer science, Metric (mathematics), Evaluation of machine translation, Artificial intelligence, Natural language processing - Published
- 2021
10. Exploring the Effectiveness of Employing Limited Resources for Deep Neural Pairwise Evaluation of Machine Translation
- Author
-
Katia Lida Kermanidis and Despoina Mouratidis
- Subjects
Machine translation, Artificial neural network, Computer science, String (computer science), Concatenation, Pairwise comparison, Metric (mathematics), Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
In this paper, a light resource learning schema, i.e. a schema that depends on limited resources, is introduced, which aims to choose the better translation between two machine translation (MT) outputs, based on information regarding the source segments (SSE) and string-based features. A concatenation of vectors, including mathematically calculated embeddings from the SSE and from the statistical MT (SMT) and neural MT (NMT) segments (S1 and S2 respectively), is used as input to a neural network (NN). Experiments are run for two different forms of text structure—a formal, well-structured corpus (C2) and an informal one (C1)—for the English (EN)–Greek (EL) language pair. Instead of relying on high-level experts’ annotations, a novel automatic metric is proposed for determining the better translation, namely the quality estimation (QE) score. This score is based on string-based features derived from both the SSE and the MT segments. Experimental results demonstrate quite good performance for the proposed feed-forward NN, comparable to existing state-of-the-art models for MT evaluation that require more sophisticated resources.
- Published
- 2021
11. APROXIMANDO RESULTADOS DE TRADUÇÃO AUTOMÁTICA E IMAGENS EM DOCUMENTOS MULTIMODAIS
- Author
-
Thiago Blanch Pires and Augusto Velloso dos Santos Espindola
- Subjects
Machine Translation, Multimodality, Intersemiotic Texture, Intersemiotic Mismatches, Machine Translation Output Classification, Evaluation of machine translation, Translating and interpreting, Literature and Literary Theory, Linguistics and Language, Language and Linguistics - Abstract
The aim of this article is to report on recent findings concerning the use of Google Translate outputs in multimodal contexts. Development and evaluation of machine translation often focus on the verbal mode, but accounts of text-image relations in automatically translated multimodal documents are rare. This work therefore seeks to characterize what such relations are and how to describe them. To do so, it explores the problem through an interdisciplinary interface between Machine Translation and Multimodality, analysing some examples from the Wikihow website; it then reports on recent investigation of suitable tools and methods for properly annotating these issues, with the long-term purpose of assembling a corpus. Finally, the article provides a discussion of the findings, including some limitations and perspectives for future research.
- Published
- 2021
12. La confianza de los estudiantes de traducción en la traducción automática: ¿demasiado buena para ser verdad?
- Author
-
Anthony Pym and Ester Torres-Simón
- Subjects
Machine translation, Evaluation of machine translation, Learning community, Terminology, Linguistics, Psychology - Abstract
Advances in neural machine translation have reached the point where professional translators can work faster and produce better terminology when post-editing machine translation as opposed to fully human translation. Most translator-training programs thus include courses in how to post-edit machine translation. Many professional translators, however, are opposed to the use of post-editing rather than fully human translation, resulting in a suite of negative opinions about machine translation within the teaching situation. Here we report on two cases in which classroom activities with translation students involved the post-editing and evaluation of machine translation. We assess the students’ actual performances in the two modes (fully human translation as opposed to the post-editing of machine translation), which we then compare with the students’ comments on machine translation and their personal experience with it. It is found that although the students generally recognized the greater efficiency of post-editing, they nevertheless remained generally opposed to machine translation in principle, indicating extensive resistance within the learning community.
- Published
- 2021
13. SentSim: Crosslingual Semantic Evaluation of Machine Translation
- Author
-
Junchen Zhao, Lucia Specia, and Yurun Song
- Subjects
Machine translation, Computer science, Semantic similarity, Embedding, Sentence, Metric (mathematics), Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
Machine translation (MT) is currently evaluated in one of two ways: in a monolingual fashion, by comparing the system output to one or more human reference translations, or in a trained crosslingual fashion, by building a supervised model to predict quality scores from human-labeled data. In this paper, we propose a more cost-effective, yet well performing, unsupervised alternative, SentSim: relying on strong pretrained multilingual word and sentence representations, we directly compare the source with the machine-translated sentence, thus avoiding the need for both reference translations and labelled training data. The metric builds on state-of-the-art embedding-based approaches—namely BERTScore and Word Mover’s Distance—by incorporating a notion of sentence semantic similarity. By doing so, it achieves better correlation with human scores on different datasets. We show that it outperforms these and other metrics in the standard monolingual setting (MT-reference translation), as well as in the source-MT bilingual setting, where it performs on par with glass-box approaches to quality estimation that rely on MT model information.
- Published
- 2021
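SentSim's core comparison, scoring a source sentence directly against its machine translation in a shared embedding space, reduces to a similarity between two vectors. The sketch below uses cosine similarity with toy four-dimensional vectors standing in for the pretrained multilingual encoder outputs the paper relies on; it illustrates the idea, not the paper's full metric:

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy "sentence embeddings" (stand-ins for multilingual encoder outputs):
src = [0.2, 0.7, 0.1, 0.5]    # source sentence
mt = [0.25, 0.65, 0.1, 0.5]   # machine-translated sentence
score = cosine_similarity(src, mt)  # close to 1.0 for a faithful translation
```

Because both sentences are mapped into the same crosslingual space, no reference translation is needed, which is precisely the cost advantage the abstract claims.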
14. Statistical Error Analysis of Machine Translation: The Case of Arabic
- Author
-
Nourddine Enneya, Tarik Boudaa, and Mohamed El Marouani
- Subjects
General Computer Science, Machine translation, Computer science, Fluency, Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
In this paper, we present a study of automatic error analysis in the context of machine translation into Arabic. We created a pipeline tool for evaluating machine translation outputs and identifying errors. A statistical analysis based on cumulative link models is also performed, in order to obtain a global overview of the errors made by statistical machine translation from English to Arabic, and to investigate the relationship between the errors encountered and human perception of machine translation quality. As expected, this analysis demonstrates that the impact of lexical, semantic and reordering errors is more significant than that of other errors related to the fluency of the machine translation outputs.
- Published
- 2020
15. MACHINE TRANSLATION: A CRITICAL LOOK AT THE PERFORMANCE OF RULE-BASED AND STATISTICAL MACHINE TRANSLATION
- Author
-
Brita Banitz
- Subjects
Machine translation, Statistical machine translation, Rule-based machine translation, Rule-based system, Translation system, German, Evaluation of machine translation output, Evaluation of machine translation, Computer science, Literature and Literary Theory, Linguistics and Language - Abstract
The essay provides a critical assessment of the performance of two distinct machine translation systems, Systran and Google Translate. First, a brief overview of both rule-based and statistical machine translation systems is provided, followed by a discussion of the issues involved in the automatic and human evaluation of machine translation outputs. Finally, the German translations of Mark Twain’s The Awful German Language produced by Systran and Google Translate are critically evaluated, highlighting some of the linguistic challenges faced by each translation system.
- Published
- 2020
16. Evaluation of Arabic to English Machine Translation Systems
- Author
-
Jihad Mohamad Alja'am, Somaya Al-Maadeed, Jezia Zakraoui, and Moutaz Saleh
- Subjects
Arabic machine translation, Machine translation, Arabic, Computer science, Evaluation methods, Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
Arabic machine translation plays an important role in most NLP tasks. Many machine translation systems that support Arabic already exist; however, the quality of the translation needs to be improved. In this paper, we review different research approaches to Arabic-to-English machine translation. The approaches use various evaluation methods, datasets, and tools to measure their performance. Moreover, this paper sheds light on several methods and assessment efforts, and on future ideas for improving Arabic-to-English machine translation quality. The review yields three major findings: first, neural machine translation approaches outperform other approaches in many respects; second, the recently emerging attention-based approach is useful for improving the performance of neural machine translation across languages; third, translation quality depends on the quality of the dataset, a well-behaved aligned corpus, and the evaluation technique used.
- Published
- 2020
17. Dataset for comparable evaluation of machine translation between 11 South African languages
- Author
-
Martin Puttkammer and Cindy A. McKellar
- Subjects
Machine translation, Computer science, Languages of Africa, Language technology, Human language technology, Automatic evaluation, Evaluation of machine translation, Natural language processing, Multidisciplinary, Arts and Humanity - Abstract
This data article describes the Autshumato machine translation evaluation set. The evaluation set contains data that can be used to evaluate machine translation systems between any of the 11 official South African languages. The dataset is parallel with four reference translations available for each of the following languages: Afrikaans, English, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, Siswati, Tshivenḓa and Xitsonga. Keywords: Machine translation, Automatic evaluation, Natural language processing, Human language technology
- Published
- 2020
18. Contrasting Human Opinion of Non-factoid Question Answering with Automatic Evaluation
- Author
-
Tianbo Ji, Gareth J. F. Jones, and Yvette Graham
- Subjects
Computer science, Factoid, Question answering, Automatic summarization, Comprehension, Test set, Evaluation of machine translation, Artificial intelligence, Natural language processing - Abstract
Evaluation in non-factoid question answering tasks generally takes the form of computing automatic metric scores for systems on a sample test set of questions against human-generated reference answers. Conclusions drawn from the scores produced by automatic metrics inevitably lead to important decisions about future directions. Commonly applied metrics include ROUGE, adopted from the related field of summarization, and BLEU and Meteor, both originally developed for the evaluation of machine translation. In this paper, we pose an important question: given that question answering is evaluated by applying automatic metrics originally designed for other tasks, to what degree do the conclusions drawn from such metrics correspond to human opinion about system-generated answers? We take the task of machine reading comprehension (MRC) as a case study and, to address this question, provide a new method of human evaluation developed specifically for the task at hand.
- Published
- 2020
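Metric-human agreement of the kind questioned above is conventionally quantified by correlating automatic metric scores with human judgments over a set of system outputs. A minimal Pearson-correlation sketch follows; this is the standard tool used across MT and QA evaluation, not a method specific to this paper:

```python
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between metric scores and human judgments.
    Values near 1.0 mean the metric tracks human opinion closely."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0
```

A low correlation between, say, ROUGE scores and human ratings of answer quality is exactly the kind of evidence that motivates the task-specific human evaluation the paper proposes.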
19. On the Evaluation of Machine Translation n-best Lists
- Author
-
Matt Post, Huda Khayrallah, Jacob Bremerman, and Douglas W. Oard
- Subjects
Machine translation, Information retrieval, Computer science, Preference, Closeness, Evaluation of machine translation - Abstract
The standard machine translation evaluation framework measures the single-best output of machine translation systems. There are, however, many situations where n-best lists are needed, yet there is no established way of evaluating them. This paper establishes a framework for addressing n-best evaluation by outlining three different questions one could consider when determining how one would define a ‘good’ n-best list and proposing evaluation measures for each question. The first and principal contribution is an evaluation measure that characterizes the translation quality of an entire n-best list by asking whether many of the valid translations are placed near the top of the list. The second is a measure that uses gold translations with preference annotations to ask to what degree systems can produce ranked lists in preference order. The third is a measure that rewards partial matches, evaluating the closeness of the many items in an n-best list to a set of many valid references. These three perspectives make clear that having access to many references can be useful when n-best evaluation is the goal.
- Published
- 2020
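The first measure described above asks whether many valid translations are placed near the top of an n-best list. A minimal sketch of one such quantity, the fraction of the top-k entries that appear in a set of valid references, is given below; this is an illustrative simplification, not the paper's exact formulation:

```python
def valid_at_k(nbest: list[str], valid: set[str], k: int) -> float:
    """Fraction of the top-k entries of an n-best list that are
    members of the set of valid reference translations."""
    top = nbest[:k]
    return sum(1 for hyp in top if hyp in valid) / max(len(top), 1)
```

Sweeping k from 1 to n traces how quickly a system exhausts its valid translations, which is the ranking behaviour the paper's first evaluation measure characterizes.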
20. Evaluation of Machine Translation Methods applied to Medical Terminologies
- Author
-
Yann Briand, Florent Desgrippes, and Konstantinos Skianis
- Subjects
Machine translation, Computer science, Interoperability, Health care, Data science, Evaluation of machine translation - Abstract
Medical terminology resources and standards play vital roles in clinical data exchange, significantly enabling the interoperability of services within national healthcare information networks. Health and medical science are constantly evolving, which requires the terminology editions to advance as well. In this paper, we present our evaluation of the latest machine translation techniques applied to medical terminologies. Experiments were conducted using selected statistical and neural machine translation methods. The devised procedure is tested on a validated sample of ICD-11 and ICF terminologies from English to French, with promising results.
- Published
- 2020
21. Big Data and Machine Learning for Evaluating Machine Translation
- Author
-
Rashmi Agrawal and Simran Kaur Jolly
- Subjects
Machine translation ,Computer science ,business.industry ,media_common.quotation_subject ,Ambiguity ,computer.software_genre ,Machine learning ,Statistical classification ,ComputingMethodologies_PATTERNRECOGNITION ,Categorization ,Expectation–maximization algorithm ,Classifier (linguistics) ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Artificial intelligence ,Evaluation of machine translation ,business ,computer ,Sentence ,media_common - Abstract
Human evaluation of machine translation is the most important aspect of improving the accuracy of translation output, which can then be used for text categorization. In this article we describe an approach to text classification based on parallel corpora and natural language processing techniques. A text classifier is built on multilingual texts by translating the different features of the model using the Expectation-Maximization algorithm. Cross-lingual text classification is the process of classifying text into different languages during translation by using training data. The main idea underlying this mechanism is to use training data from a parallel corpus and apply classification algorithms to reduce distortion and alignment errors in machine translation. In this chapter a classification model is trained that maps the source language to the target language on the basis of translation knowledge and defined parameters. The algorithm adopted here is the Expectation-Maximization algorithm, which removes ambiguity in parallel corpora by aligning source sentences to target sentences. It considers possible translations from the source to the target language and selects the one that best fits the model on the basis of the BLEU (bilingual evaluation understudy) score. The only requirement of this learning approach is unlabelled data in the target language. The algorithm can be evaluated accurately by running a separate classifier on different parallel corpora. We use monolingual corpora and machine translation in our study to see the effect of both models on our parallel corpora.
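The selection step the chapter describes — choosing, among candidate translations, the one that best fits on a BLEU-style n-gram overlap criterion — can be sketched as follows (a simplified modified n-gram precision, not the full BLEU with brevity penalty):

```python
from collections import Counter

def ngram_precision(cand, ref, n=2):
    """Modified n-gram precision of a tokenised candidate against one
    reference, pooled over orders 1..n."""
    def grams(toks, k):
        return Counter(tuple(toks[i:i + k]) for i in range(len(toks) - k + 1))
    total, match = 0, 0
    for k in range(1, n + 1):
        c, r = grams(cand, k), grams(ref, k)
        match += sum(min(c[g], r[g]) for g in c)
        total += sum(c.values())
    return match / total if total else 0.0

def best_candidate(candidates, ref):
    """Pick the candidate translation scoring highest against the reference."""
    return max(candidates, key=lambda c: ngram_precision(c, ref))
```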
- Published
- 2020
22. Informative Manual Evaluation of Machine Translation Output
- Author
-
Maja Popović
- Subjects
050101 languages & linguistics ,business.industry ,Computer science ,media_common.quotation_subject ,05 social sciences ,Rank (computer programming) ,Novelty ,02 engineering and technology ,Translation (geometry) ,computer.software_genre ,Domain (software engineering) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Quality (business) ,Evaluation of machine translation ,Artificial intelligence ,business ,Machine translating ,computer ,Natural language processing ,media_common - Abstract
This work proposes a new method for manual evaluation of Machine Translation (MT) output based on marking actual issues in the translated text. The novelty is that the evaluators are not assigning any scores, nor classifying errors, but marking all problematic parts (words, phrases, sentences) of the translation. The main advantage of this method is that the resulting annotations do not only provide overall scores by counting words with assigned tags, but can be further used for analysis of errors and challenging linguistic phenomena, as well as inter-annotator disagreements. Detailed analysis and understanding of actual problems are not enabled by typical manual evaluations where the annotators are asked to assign overall scores or to rank two or more translations. The proposed method is very general: it can be applied to any genre/domain and language pair, and it can be guided by various types of quality criteria. Also, it is not restricted to MT output, but can be used for other types of generated text.
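Deriving an overall score from such issue-marking annotations reduces to counting marked tokens. A minimal sketch (the `(start, end)` span format with exclusive end is an assumption of this illustration):

```python
def issue_rate(tokens, marked_spans):
    """Overall score from a marked-issues annotation: the share of tokens
    falling inside any (start, end) problem span, end exclusive.
    Overlapping spans count each token only once."""
    bad = set()
    for start, end in marked_spans:
        bad.update(range(start, end))
    return len(bad & set(range(len(tokens)))) / len(tokens) if tokens else 0.0
```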
- Published
- 2020
23. Optimizing Automatic Evaluation of Machine Translation with the ListMLE Approach
- Author
-
Maoxi Li and Mingwen Wang
- Subjects
Word embedding ,General Computer Science ,Machine translation ,Computer science ,business.industry ,02 engineering and technology ,Translation (geometry) ,computer.software_genre ,Machine learning ,Ranking (information retrieval) ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Semantic mapping ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Learning to rank ,Evaluation of machine translation ,Language model ,Artificial intelligence ,0305 other medical science ,business ,computer - Abstract
Automatic evaluation of machine translation is critical for the evaluation and development of machine translation systems. In this study, we propose a new model for the automatic evaluation of machine translation. The proposed model combines standard n-gram precision features and sentence semantic mapping features with neural features, including neural language model probabilities and the embedding distances between translation outputs and their reference translations. We optimize the model with a representative list-wise learning-to-rank approach, ListMLE, in terms of human ranking assessments. The experimental results on the WMT 2015 Metrics task indicate that the proposed approach yields significantly better correlations with human assessments than several state-of-the-art baseline approaches. In particular, the results confirm that the proposed list-wise learning-to-rank approach is useful and powerful for optimizing automatic evaluation metrics in terms of human ranking assessments. Deeper analysis also demonstrates that optimizing automatic metrics with the ListMLE approach is a reasonable method, and that adding the neural features yields considerable improvements over the traditional features.
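The ListMLE objective itself is compact: under a Plackett–Luce model it is the negative log-likelihood of the ground-truth ordering given the model's scores. A self-contained sketch (scores are assumed pre-sorted in the true best-first order):

```python
import math

def listmle_loss(scores_in_true_order):
    """Plackett–Luce ListMLE loss: negative log-likelihood of the
    ground-truth ranking given model scores listed best-first.
    Each position contributes -(s_i - log sum_{j >= i} exp(s_j))."""
    loss = 0.0
    for i in range(len(scores_in_true_order)):
        tail = scores_in_true_order[i:]
        loss -= scores_in_true_order[i] - math.log(sum(math.exp(s) for s in tail))
    return loss
```

Scores that agree with the true ranking give a lower loss than the same scores in reversed order, which is what gradient descent on this objective exploits.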
- Published
- 2018
24. Statistical machine translation of Indian languages: a survey
- Author
-
Nadeem Khan Jadoon, Usama Ijaz Bajwa, Farooq Ahmad, and Waqas Anwar
- Subjects
0209 industrial biotechnology ,Phrase ,Machine translation ,Computer science ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,Telugu ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Gujarati ,Evaluation of machine translation ,Hindi ,business.industry ,language.human_language ,Bengali ,Tamil ,language ,Malayalam ,020201 artificial intelligence & image processing ,Indian language ,Artificial intelligence ,Urdu ,business ,computer ,Software ,Natural language processing - Abstract
In this study, a performance analysis of a state-of-the-art phrase-based statistical machine translation (SMT) system is presented for eight Indian languages. The state of the art in SMT from different Indian languages into English is also discussed briefly. The motivation of this study was to promote the development of SMT and linguistic resources for these Indian language pairs, as the current systems are in their infancy due to sparse data resources. The EMILLE and crowdsourced parallel corpora have been used in this study for experimental purposes. The study concludes by presenting the performance of a baseline SMT system translating from the Indian languages (Bengali, Gujarati, Hindi, Malayalam, Punjabi, Tamil, Telugu and Urdu) into English, with an average accuracy of 10–20% across all language pairs. As a result of this study, both the annotated parallel corpora and the SMT system will serve as benchmarks for future approaches to SMT for Hindi → English, Urdu → English, Punjabi → English, Telugu → English, Tamil → English, Gujarati → English, Bengali → English and Malayalam → English.
- Published
- 2017
25. Translation Errors Made by Indonesian-English Translators in Crowdsourcing Translation Application
- Author
-
Mansur Akil, Zainar M Salam, and Andi Qashas Rahman
- Subjects
Machine translation ,business.industry ,Computer science ,Education (General) ,PE1-3729 ,computer.software_genre ,Crowdsourcing ,Machine translation software usability ,Linguistics ,Example-based machine translation ,translation, errors, crowdsourcing, application, indonesian-english ,English language ,Rule-based machine translation ,Computer-assisted translation ,Evaluation of machine translation ,Artificial intelligence ,L7-991 ,business ,computer ,Dynamic and formal equivalence ,Natural language processing - Abstract
The research aims to describe the kinds of translation errors made by Indonesian-English translators in a crowdsourcing translation application, and the dominant kind among them. The problem statements of the research are: (1) What kinds of translation errors are made by Indonesian-English translators in the crowdsourcing translation application? (2) What is the dominant kind of translation error made by these translators? The method used in the research was descriptive qualitative. The subjects of the research were the Indonesian-English translators of the crowdsourcing translation application. The researcher took 50 Indonesian-English translation requests (source language texts) and all of their English translations (target language texts) from the application, and classified the translation errors found into 5 kinds. The results revealed that the 50 source language texts were translated into 353 target language texts, with 350 distinct translations in total. There were 75 translation errors in total, affecting 21.25% of the 353 target language texts: 3 (0.85%) errors of inversion of meaning, 11 (3.12%) of omission of meaning, 8 (2.27%) of addition of meaning, 44 (12.46%) of deviation of meaning, and 9 (2.55%) of modification of meaning. The dominant kind of translation error was deviation of meaning, accounting for more than half (58.67%) of all translation errors.
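The paper's percentages follow directly from the raw counts, and the bookkeeping can be reproduced like this (the counts are taken from the abstract above; the per-target figures divide by the 353 target texts, the "dominant kind" figure by the 75 errors):

```python
def error_shares(counts, n_targets):
    """Percentage of target texts affected by each error kind, plus each
    kind's share of all errors (the paper's 'dominant kind' figure)."""
    total = sum(counts.values())
    per_target = {k: round(100 * v / n_targets, 2) for k, v in counts.items()}
    per_error = {k: round(100 * v / total, 2) for k, v in counts.items()}
    return per_target, per_error

counts = {"inversion": 3, "omission": 11, "addition": 8,
          "deviation": 44, "modification": 9}
per_target, per_error = error_shares(counts, 353)
```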
- Published
- 2017
26. BBN’s low-resource machine translation for the LoReHLT 2016 evaluation
- Author
-
Rabih Zbib, Zhongqiang Huang, and Hendra Setiawan
- Subjects
Linguistics and Language ,Machine translation ,business.industry ,Computer science ,Transfer-based machine translation ,computer.software_genre ,Machine learning ,Machine translation software usability ,Language and Linguistics ,Example-based machine translation ,Rule-based machine translation ,Artificial Intelligence ,Computer-assisted translation ,Synchronous context-free grammar ,Artificial intelligence ,Evaluation of machine translation ,business ,computer ,Software ,Natural language processing - Abstract
We describe BBN's contribution to the machine translation (MT) task in the LoReHLT 2016 evaluation, focusing on the techniques and methodologies employed to build the Uyghur–English MT systems in low-resource conditions. In particular, we discuss the data selection process, morphological segmentation of the source, neural network feature models, and our use of a native informant and related language resources. Our final submission for the evaluation was ranked first among all participants.
- Published
- 2017
27. On integrating a language model into neural machine translation
- Author
-
Orhan Firat, Kelvin Xu, Yoshua Bengio, Caglar Gulcehre, and Kyunghyun Cho
- Subjects
Machine translation ,Computer science ,business.industry ,02 engineering and technology ,010501 environmental sciences ,Transfer-based machine translation ,computer.software_genre ,Machine learning ,01 natural sciences ,Machine translation software usability ,Theoretical Computer Science ,Human-Computer Interaction ,Example-based machine translation ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Synchronous context-free grammar ,Evaluation of machine translation ,Artificial intelligence ,Language model ,business ,computer ,Software ,Natural language processing ,0105 earth and related environmental sciences - Abstract
Recent advances in end-to-end neural machine translation models have achieved promising results on high-resource language pairs such as En→Fr and En→De. One of the major factors behind these successes is the availability of high-quality parallel corpora. We explore two strategies for leveraging abundant amounts of monolingual data in neural machine translation. We observe improvements both from combining the scores of a neural language model trained only on target monolingual data with those of the neural machine translation model, and from fusing the hidden states of the two models. We obtain up to a 2 BLEU improvement over hierarchical and phrase-based baselines on a low-resource language pair, Turkish→English. Our method was initially motivated by tasks with little parallel data, but we also show that it extends to high-resource languages such as the Cs→En and De→En translation tasks, where we obtain 0.39 and 0.47 BLEU improvements over the neural machine translation baselines, respectively.
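The first strategy — combining NMT and LM scores, often called shallow fusion — amounts to a weighted sum of log-probabilities when ranking candidates for the next token. A sketch (the β weight and the toy distributions are illustrative assumptions):

```python
import math

def shallow_fusion_pick(tm_logprob, lm_logprob, beta=0.3):
    """Pick the next token by translation-model log-probability plus a
    weighted language-model log-probability (shallow fusion)."""
    fused = {w: tm_logprob[w] + beta * lm_logprob.get(w, -math.inf)
             for w in tm_logprob}
    return max(fused, key=fused.get)

# Toy next-token log-prob distributions; the monolingual LM nudges the
# choice toward the more fluent continuation.
tm = {"bank": math.log(0.50), "shore": math.log(0.45), "the": math.log(0.05)}
lm = {"bank": math.log(0.10), "shore": math.log(0.85), "the": math.log(0.05)}
```

With β = 0 the translation model alone decides; raising β lets the language model override near-ties.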
- Published
- 2017
28. A Hybrid Approach Using Phrases and Rules for Hindi to English Machine Translation
- Author
-
Niladri Chatterjee and Susmita Gupta
- Subjects
Hindi ,Machine translation ,business.industry ,Computer science ,computer.software_genre ,Hybrid approach ,Machine translation software usability ,language.human_language ,Linguistics ,Example-based machine translation ,language ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing - Published
- 2017
29. Calling translation to the bar
- Author
-
Daniele Orlando and Orlando, Daniele
- Subjects
translator training ,Linguistics and Language ,Legal translation ,error analysi ,translation competence ,Literature and Literary Theory ,Computer science ,Speech recognition ,error analysis ,legal translation ,empirical study ,Keystroke logging ,Language and Linguistics ,Linguistics ,Education ,Terminology ,Empirical research ,Phraseology ,Criminal law ,Evaluation of machine translation ,Competence (human resources) - Abstract
This paper proposes a comparative analysis of the translation errors made by prospective legal translation trainees, with a special focus on the (mis)use of legal terminology and phraseology. The investigation relies on the data produced and collected within a wider empirical study on the translation problems faced by a cohort of translation graduates with no specialisation in legal translation on the one hand, and a cohort of linguistically-skilled lawyers with no translation-related qualifications on the other, who translated the same criminal law document from English into Italian. The translation errors made by the two cohorts have been classified on the basis of the categories proposed by Mossop (2014) and assessed following the severity scale devised by Vollmar (2001). The Translation Quality Index (cf. Schiaffino and Zearo 2006) thus obtained has allowed for the ranking of the participants in the five quality levels identified for legal translation by Prieto Ramos (2014). The findings of the quantitative and qualitative analyses of errors are also traced back to the participants’ translation process by triangulating data from the different collection methods used within the empirical study, i.e. screen recording, keystroke logging and questionnaires, with particular reference to time and reference material use. The specific design of this investigation, which considers the participants’ prior education as additional variable, allows for the identification of a possible correlation between the different backgrounds of the translators and the quality of their translations, with general consequences on the conceptualisation of legal translation competence and effective training.
- Published
- 2017
30. COMPREHENSIVE APPROACH FOR BILINGUAL MACHINE TRANSLATION
- Author
-
M. Hanumanthappa and Sharanbasappa Honnashetty
- Subjects
Machine translation ,Computer science ,business.industry ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,Transfer-based machine translation ,computer.software_genre ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Machine translation software usability ,Linguistics ,Example-based machine translation ,Rule-based machine translation ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Computer-assisted translation ,Synchronous context-free grammar ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
Machine translation has been a major focus of the NLP group since 1999; the principal aim of the Natural Language Processing group is to build a machine translation system that automatically learns translation mappings from bilingual corpora. This paper explores a novel approach to phrase-based machine translation from English to Kannada and from Kannada to English. The source text is first analyzed; simple sentences are then translated using rules, while complex sentences are split into simple sentences before translation is performed.
- Published
- 2017
31. Comparing a Hand-crafted to an Automatically Generated Feature Set for Deep Learning: Pairwise Translation Evaluation
- Author
-
Katia Lida Kermanidis and Despoina Mouratidis
- Subjects
Machine translation ,Artificial neural network ,business.industry ,Computer science ,Deep learning ,String (computer science) ,Pattern recognition ,computer.software_genre ,Random forest ,Support vector machine ,Rule-based machine translation ,Artificial intelligence ,Evaluation of machine translation ,business ,computer - Abstract
The automatic evaluation of machine translation (MT) has proven to be a very significant research topic. Most automatic evaluation methods focus on the output of MT, computing similarity scores that represent translation quality. This work targets the performance of MT evaluation itself. We present a general scheme for learning to classify parallel translations, using linguistic information, from two MT model outputs and one human (reference) translation. We present three experiments within this scheme using neural networks (NNs): one using string-based hand-crafted features (Exp1); a second using embeddings automatically trained from the reference and the two MT outputs (one from a statistical machine translation (SMT) model and the other from a neural machine translation (NMT) model), which are learned using an NN (Exp2); and a third experiment (Exp3) that combines information from the other two. The languages involved are English (EN), Greek (GR) and Italian (IT), and the segments are educational in domain. The proposed language-independent learning scheme combining information from the first two experiments (Exp3) achieves higher classification accuracy than models using BLEU score information, as well as other classification approaches such as Random Forest (RF) and Support Vector Machine (SVM).
- Published
- 2019
32. On The Evaluation of Machine Translation Systems Trained With Back-Translation
- Author
-
Sergey Edunov, Michael Auli, Myle Ott, and Marc'Aurelio Ranzato
- Subjects
FOS: Computer and information sciences ,Matching (statistics) ,Computer Science - Computation and Language ,Computer science ,business.industry ,media_common.quotation_subject ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,01 natural sciences ,Fluency ,0202 electrical engineering, electronic engineering, information engineering ,Natural (music) ,020201 artificial intelligence & image processing ,Quality (business) ,Evaluation of machine translation ,Artificial intelligence ,Language model ,business ,computer ,Computation and Language (cs.CL) ,Natural language processing ,0105 earth and related environmental sciences ,BLEU ,media_common - Abstract
Back-translation is a widely used data augmentation technique which leverages target-side monolingual data. However, its effectiveness has been challenged, since automatic metrics such as BLEU only show significant improvements for test examples where the source itself is a translation, or translationese. This is believed to be due to translationese inputs better matching the back-translated training data. In this work, we show that this conjecture is not empirically supported and that back-translation improves the translation quality of both naturally occurring text and translationese, according to professional human translators. We provide empirical evidence supporting the view that back-translation is preferred by humans because it produces more fluent outputs. BLEU cannot capture human preferences because references are translationese when source sentences are natural text. We recommend complementing BLEU with a language model score to measure fluency. (ACL 2020)
- Published
- 2019
33. Research on Machine Translation Automatic Evaluation Based on Extended Reference
- Author
-
Baozhong Gao, Na Li, Weizhi Xu, Hui Yu, Wentao Su, and Yang Li
- Subjects
Translation system ,Machine translation ,Computer science ,Process (engineering) ,business.industry ,computer.software_genre ,Translation (geometry) ,Evaluation methods ,Metric (mathematics) ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing ,Natural language - Abstract
Language is the main carrier of communication between cultures, but translation between languages remains one of the biggest obstacles to communication. Machine translation is a process that uses a computer to transform one natural language into another. The automatic evaluation of machine translation is an important research area within machine translation technology: it can uncover defects in a translation system and promote its development. After several decades of development, automatic evaluation has achieved rich results and a wide variety of evaluation methods has emerged. In this paper, three representative evaluation methods are introduced and their respective advantages and disadvantages are analyzed. In addition, we describe evaluation techniques based on references; although expanding the coverage of references is not the main method, it plays an important role in improving the performance of automatic evaluation methods. Finally, we summarize the development trends of automatic evaluation metrics based on extended references and the related issues that need to be further addressed.
- Published
- 2019
34. Automatic Evaluation Method Using Dependency Parsing Model Based on Maximum Entropy
- Author
-
Na Li and Hui Yu
- Subjects
Machine translation ,Computer science ,business.industry ,Principle of maximum entropy ,Process (computing) ,Computer Science::Computation and Language (Computational Linguistics and Natural Language and Speech Processing) ,Pattern recognition ,computer.software_genre ,Dependency grammar ,Metric (mathematics) ,Evaluation methods ,Artificial intelligence ,Evaluation of machine translation ,business ,computer ,Sentence - Abstract
The automatic evaluation of machine translation has achieved rich results, and various evaluation methods have emerged. This paper introduces an automatic evaluation method using a dependency parsing model based on maximum entropy. The dependency tree reflects the relationships between the words in a sentence; we therefore compare the dependency tree of the reference with the dependency tree of the machine translation to judge the accuracy of the translation. In our method, the dependency trees of the references are used as the training corpus to obtain the dependency parsing model, and the model is then used to score the dependency tree of the machine translation. The maximum entropy method is used in the dependency parsing process. The experimental results show that, at the system level, the new metric based on the maximum-entropy dependency parsing model is effective.
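A simpler relative of this idea — judging a translation by how well its dependency structure agrees with the reference's — is F1 over unlabeled (head, dependent) edges. A hedged sketch (the maximum-entropy parser itself is out of scope here; edges are assumed given as token-index pairs):

```python
def edge_f1(ref_edges, hyp_edges):
    """F1 between the unlabeled dependency edges (head, dependent) of the
    reference parse and of the hypothesis parse — a simple stand-in for
    scoring a translation by its syntactic agreement with the reference."""
    ref, hyp = set(ref_edges), set(hyp_edges)
    if not ref or not hyp:
        return 0.0
    overlap = len(ref & hyp)
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r) if p + r else 0.0
```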
- Published
- 2019
35. The Research on Quality Evaluation of Machine Translation Text in Computer Aided Translation
- Author
-
Shan Lu and Bo Hu
- Subjects
Computer science ,business.industry ,media_common.quotation_subject ,Computer-aided ,Quality (business) ,Evaluation of machine translation ,Artificial intelligence ,computer.software_genre ,Translation (geometry) ,business ,computer ,Natural language processing ,media_common - Published
- 2019
36. Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation
- Author
-
Timothy Baldwin, Nitika Mathur, and Trevor Cohn
- Subjects
Machine translation ,Computer science ,business.industry ,Context (language use) ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,01 natural sciences ,Field (computer science) ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Word (computer architecture) ,Sentence ,Natural language processing ,0105 earth and related environmental sciences - Abstract
Accurate automatic evaluation of machine translation is critical for system tuning and for evaluating progress in the field. We propose a simple unsupervised metric, and additional supervised metrics, which rely on contextual word embeddings to encode the translation and reference sentences. We find that these models rival or surpass all existing metrics on the WMT 2017 sentence-level and system-level tracks, and our trained model has a substantially higher correlation with human judgements than all existing metrics on the WMT 2017 to-English sentence-level dataset.
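Stripped to its core, an embedding-based metric scores a hypothesis by the similarity of its encoded representation to the reference's. A minimal unsupervised sketch using mean-pooled word vectors (contextual encoders like those in the paper would replace the toy vectors assumed here):

```python
import math

def mean_vector(vectors):
    """Element-wise mean of a non-empty list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

def cosine(u, v):
    """Cosine similarity of two vectors; 0.0 when either is zero."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def embedding_score(hyp_vecs, ref_vecs):
    """Sentence-level score: cosine similarity between the mean embedding
    of the hypothesis tokens and of the reference tokens."""
    return cosine(mean_vector(hyp_vecs), mean_vector(ref_vecs))
```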
- Published
- 2019
37. Automatic evaluation of the quality of machine translation of a scientific text: the results of a five-year-long experiment
- Author
-
Natalia Ivanova, Alexey Poroykov, Irina Filippova, and Ilya Ulitkin
- Subjects
Machine translation ,Computer science ,business.industry ,media_common.quotation_subject ,Automatic translation ,0211 other engineering and technologies ,02 engineering and technology ,String searching algorithm ,010501 environmental sciences ,Translation (geometry) ,computer.software_genre ,01 natural sciences ,Environmental sciences ,GE1-350 ,Quality (business) ,021108 energy ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing ,0105 earth and related environmental sciences ,media_common - Abstract
We report on various approaches to the automatic evaluation of machine translation quality and describe three widely used methods. These methods, based on string matching and n-gram models, make it possible to compare the quality of machine translation against a reference translation. We employ modern metrics for the automatic evaluation of machine translation quality, such as BLEU, F-measure, and TER, to compare translations made by the Google and PROMT neural machine translation systems with translations obtained 5 years ago, when Google and PROMT employed statistical machine translation and rule-based machine translation algorithms, respectively, as their main translation algorithms [6]. Evaluating the candidate texts generated by Google and PROMT against reference translations with an automatic translation evaluation program reveals significant qualitative changes compared with the results obtained 5 years ago, indicating a dramatic improvement in these online translation systems. Ways to improve the quality of machine translation are discussed. It is shown that modern systems for the automatic evaluation of translation quality allow errors made by machine translation systems to be identified and systematized, which will enable the quality of translation by these systems to be improved in the future.
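Of the metric families mentioned, TER is the easiest to sketch: edit distance normalized by reference length. A simplified version without TER's block-shift operation (so scores here upper-bound real TER):

```python
def ter_lite(hyp_tokens, ref_tokens):
    """Simplified TER: word-level Levenshtein distance (insert, delete,
    substitute; no shifts) divided by reference length.
    Lower is better; 0.0 means an exact match."""
    m, n = len(hyp_tokens), len(ref_tokens)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if hyp_tokens[i - 1] == ref_tokens[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n] / n if n else 0.0
```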
- Published
- 2021
38. Word Re-Segmentation in Chinese-Vietnamese Machine Translation
- Author
-
Long H. B. Nguyen, Phuoc Tran, and Dien Dinh
- Subjects
0209 industrial biotechnology ,General Computer Science ,Machine translation ,Computer science ,business.industry ,Speech recognition ,Word error rate ,02 engineering and technology ,Transfer-based machine translation ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,020901 industrial engineering & automation ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Synchronous context-free grammar ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
In isolated languages such as Chinese and Vietnamese, words are not separated by spaces, and a word may be formed by one or more syllables. Therefore, word segmentation (WS) is usually the first step in the machine translation process. WS in the source and target languages is based on different training corpora, and the WS approaches may differ as well. As a result, the WS results in the two languages are often not homologous, so word alignment produces many 1-n and n-1 alignment pairs in statistical machine translation, which degrades translation performance. In this article, we adjust the WS for Chinese and Vietnamese in particular, and for isolated language pairs in general, making the word boundaries of the two languages more symmetric in order to strengthen 1-1 alignments and enhance machine translation performance. We tested this method on the Computational Linguistics Center's corpus, which consists of 35,623 sentence pairs. The experimental results show that our method significantly improves translation performance compared to the baseline translation system, the WS translation system, and anchor-language-based WS translation systems.
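One way to make the two sides' segmentation policies symmetric is to apply the same greedy longest-match pass, driven by a shared lexicon, to both syllable streams. A hedged sketch (the toy lexicon and unaccented syllables are hypothetical; the paper's resegmentation procedure is more involved):

```python
def longest_match_segment(syllables, lexicon, max_len=4):
    """Greedy longest-match word segmentation over a syllable sequence.
    Multi-syllable candidates are accepted only if present in `lexicon`;
    single syllables are always accepted as fallback words."""
    words, i = [], 0
    while i < len(syllables):
        for k in range(min(max_len, len(syllables) - i), 0, -1):
            cand = " ".join(syllables[i:i + k])
            if k == 1 or cand in lexicon:
                words.append(cand)
                i += k
                break
    return words
```

Running the same function with the same lexicon on both languages yields matching word boundaries where the lexicon agrees, which is what strengthens 1-1 alignments.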
- Published
- 2016
39. A Loss-Augmented Approach to Training Syntactic Machine Translation Systems
- Author
-
Jingbo Zhu, Derek F. Wong, and Tong Xiao
- Subjects
Acoustics and Ultrasonics ,Machine translation ,Computer science ,business.industry ,02 engineering and technology ,computer.software_genre ,Machine learning ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Computational Mathematics ,Rule-based machine translation ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,Feature (machine learning) ,Beam search ,020201 artificial intelligence & image processing ,Artificial intelligence ,Evaluation of machine translation ,Electrical and Electronic Engineering ,Language translation ,0305 other medical science ,business ,computer ,Decoding methods - Abstract
Current syntactic machine translation (MT) systems implicitly use beam-width-unlimited search when learning model parameters (e.g., feature values for each translation rule). However, a limited beam width has to be adopted when decoding new sentences, and the MT output is in general evaluated by various metrics, such as BLEU and TER. In this paper, we address (1) the mismatch between the beam widths adopted in training and in decoding, and (2) the mismatch between training criteria and MT evaluation metrics. Unlike previous work, we model the two problems simultaneously in a single training paradigm. We design a loss-augmented approach that explicitly considers the limited beam width and the evaluation metric during training, and present a simple but effective method to learn the model. Using beam search and BLEU-related losses, our approach improves a state-of-the-art syntactic MT system by +1.0 BLEU on Chinese-to-English and English-to-Chinese translation tasks. It even outperforms seven previous training approaches by over 0.8 BLEU points. More interestingly, promising improvements are observed when our approach works with TER.
- Published
- 2016
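For readers unfamiliar with the BLEU metric used here and throughout these records, a simplified sentence-level variant (modified n-gram precision with add-one smoothing and a brevity penalty) can be sketched as follows. Real evaluations use corpus-level BLEU from standard tools, so this is an illustration only.

```python
# Simplified sentence-level BLEU (up to 4-grams, add-one smoothing, brevity
# penalty), for illustration only -- MT papers normally report corpus-level
# BLEU computed with standard tools such as mteval or sacreBLEU.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in hyp.items())
        total = max(sum(hyp.values()), 1)
        # Add-one smoothing keeps the score nonzero for short sentences.
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * math.exp(log_prec)

ref = "the cat sat on the mat".split()
# A perfect match scores 1.0; a truncated hypothesis scores strictly lower.
assert sentence_bleu(ref, ref) > sentence_bleu(ref, "the cat sat".split())
```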
41. Speech translation system for English to Dravidian languages
- Author
-
S. Jothilakshmi and J. Sangeetha
- Subjects
Machine translation ,Computer science ,Speech recognition ,02 engineering and technology ,Hybrid machine translation ,Intelligibility (communication) ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Rule-based machine translation ,Artificial Intelligence ,Speech translation ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation of machine translation ,Hidden Markov model ,Prosody ,business.industry ,Dravidian languages ,language.human_language ,ComputingMethodologies_PATTERNRECOGNITION ,Tamil ,Malayalam ,language ,020201 artificial intelligence & image processing ,Artificial intelligence ,Syllable ,0305 other medical science ,business ,computer ,Natural language processing - Abstract
In this paper, a Speech-to-Speech Translation (SST) system focused mainly on translation from English to Dravidian languages (Tamil and Malayalam) is proposed. The three major techniques involved in an SST system are automatic continuous speech recognition, machine translation, and text-to-speech synthesis. Automatic Continuous Speech Recognition (CSR) has been developed based on the Auto-Associative Neural Network (AANN), the Support Vector Machine (SVM), and the Hidden Markov Model (HMM). The HMM yields better results than SVM and AANN; hence, the HMM-based speech recognizer has been adopted for English. We propose a hybrid Machine Translation (MT) system (a combination of rule-based and statistical approaches) for converting English text to Dravidian languages. A syllable-based concatenative Text-To-Speech synthesis (TTS) system for Tamil and Malayalam has been proposed. AANN-based prosody prediction has been performed for Tamil, which improves naturalness and intelligibility. The domain is restricted to sentences that cover announcements in railway stations, bus stops, and airports. This work frames a novel translation method for English to Dravidian languages. The improved performance of each module (HMM-based CSR, hybrid MT, and concatenative TTS) increases the overall speech translation performance. The proposed speech translation system can be applied from English to any Indian language if we train and create a parallel corpus for those languages.
- Published
- 2016
41. English-Dogri Translation System using MOSES
- Author
-
Shubhnandan S. Jamwal, Avinash Singh, and Asmeet Kour
- Subjects
Translation system ,Machine translation ,Computer science ,business.industry ,Geology ,Geotechnical Engineering and Engineering Geology ,computer.software_genre ,Hardware and Architecture ,Artificial intelligence ,Evaluation of machine translation ,business ,computer ,Natural language processing ,BLEU - Abstract
The objective of this paper is to analyze English-Dogri parallel corpus translation. Machine translation is the translation from one language into another and is one of the biggest applications of Natural Language Processing (NLP). Moses is a statistical machine translation system that allows translation models to be trained for any language pair. We have developed a translation system using a statistical approach that translates English to Dogri and vice versa. The parallel corpus consists of 98,973 sentences. The system achieves an accuracy of 80% in translating English to Dogri and 87% in translating Dogri to English.
- Published
- 2016
42. Source Language Adaptation Approaches for Resource-Poor Machine Translation
- Author
-
Preslav Nakov, Hwee Tou Ng, and Pidong Wang
- Subjects
060201 languages & linguistics ,Linguistics and Language ,Machine translation ,business.industry ,Computer science ,06 humanities and the arts ,02 engineering and technology ,Transfer-based machine translation ,computer.software_genre ,Translation (geometry) ,Machine translation software usability ,Language and Linguistics ,Computer Science Applications ,Example-based machine translation ,Rule-based machine translation ,Artificial Intelligence ,0602 languages and literature ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Evaluation of machine translation ,business ,Adaptation (computer science) ,computer ,Natural language processing - Abstract
Most of the world's languages are resource-poor for statistical machine translation; still, many of them are related to some resource-rich language. Thus, we propose three novel, language-independent approaches to source language adaptation for resource-poor statistical machine translation. Specifically, we build improved statistical machine translation models from a resource-poor language POOR into a target language TGT by adapting and using a large bitext for a related resource-rich language RICH and the same target language TGT. We assume a small POOR–TGT bitext from which we learn word-level and phrase-level paraphrases and cross-lingual morphological variants between the resource-rich and the resource-poor language. Our work is of importance for resource-poor machine translation because it can provide a useful guideline for people building machine translation systems for resource-poor languages. Our experiments on Indonesian/Malay–English translation show that using the large adapted resource-rich bitext yields 7.26 BLEU points of improvement over the unadapted one and 3.09 BLEU points over the original small bitext. Moreover, combining the small POOR–TGT bitext with the adapted bitext outperforms the corresponding combinations with the unadapted bitext by 1.93–3.25 BLEU points. We also demonstrate the applicability of our approaches to other languages and domains.
- Published
- 2016
43. English to Tamil machine translation system using universal networking language
- Author
-
Kashyap Krishnakumar, Rajeswari Sridhar, and Pavithra Sethuraman
- Subjects
Multidisciplinary ,Machine translation ,Computer science ,business.industry ,Speech recognition ,020206 networking & telecommunications ,02 engineering and technology ,Transfer-based machine translation ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,Universal Networking Language ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Evaluation of machine translation ,business ,computer ,Sentence ,Natural language processing - Abstract
This paper proposes an English to Tamil machine translation system using the Universal Networking Language (UNL) as the intermediate representation. The UNL approach is a hybrid of the rule-based and knowledge-based approaches to machine translation. UNL is a declarative formal language specifically designed to represent semantic data extracted from natural language text. The input English sentence is converted to UNL (enconversion), which is then converted to a Tamil sentence (deconversion), ensuring that the meaning of the input sentence is preserved. The representation of UNL was modified to suit the translation process. A new sentence formation algorithm was also proposed to rearrange the translated Tamil words into sentences. The translation system was evaluated using the Bilingual Evaluation Understudy (BLEU) score. A BLEU score of 0.581 was achieved, an indication that most of the information in the input sentence is retained in the translated sentence. The scores obtained using the UNL-based approach were compared with existing approaches to translation, and it can be concluded that UNL is a better-suited approach to machine translation.
- Published
- 2016
44. A deep source-context feature for lexical selection in statistical machine translation
- Author
-
Marta R. Costa-jussà, Rafael E. Banchs, Parth Gupta, Paolo Rosso, Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions, and Universitat Politècnica de Catalunya. VEU - Grup de Tractament de la Parla
- Subjects
Phrase ,Machine translation ,Computer science ,Speech recognition ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,Semantics ,01 natural sciences ,Machine translation software usability ,Example-based machine translation ,Lexical selection ,Rule-based machine translation ,Informàtica [Àrees temàtiques de la UPC] ,Artificial Intelligence ,Traducció automàtica ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation of machine translation ,0105 earth and related environmental sciences ,BLEU ,business.industry ,Natural language processing ,Neural nets and related approaches ,Transfer-based machine translation ,Signal Processing ,020201 artificial intelligence & image processing ,Synchronous context-free grammar ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,LENGUAJES Y SISTEMAS INFORMATICOS ,computer ,Software ,Sentence - Abstract
This paper presents a methodology to address lexical disambiguation in a standard phrase-based statistical machine translation system. Similarity among source contexts is used to select appropriate translation units. The information is introduced as a novel feature of the phrase-based model and is used to select the translation units extracted from the training sentences most similar to the sentence to translate. The similarity is computed through a deep autoencoder representation, which allows obtaining effective low-dimensional embeddings of the data and yields statistically significant BLEU score improvements on two different tasks (English-to-Spanish and English-to-Hindi). © 2016 Elsevier B.V. All rights reserved. The work of the first author has been supported by an FPI UPV pre-doctoral grant (num. registro - 3505). The work of the second author has been supported by the Spanish Ministerio de Economía y Competitividad, contract TEC2015-69266-P, and the Seventh Framework Programme of the European Commission through the International Outgoing Fellowship Marie Curie Action (IMTraP-2011-29951). The work of the third author has been supported by the Spanish Ministerio de Economía y Competitividad, SomEMBED TIN2015-71147-C2-1-P research project, and by the Generalitat Valenciana under the grant ALMAPATER (PrometeoII/2014/030).
- Published
- 2016
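The similarity-based selection of translation units described above can be illustrated with a toy sketch: the embedding vectors below are made up (the paper derives them from a deep autoencoder), and the most similar training context is picked by cosine similarity.

```python
# Toy sketch: pick the training sentence whose (autoencoder-style) embedding
# is most similar to the input sentence's embedding, by cosine similarity.
# The vectors here are invented for illustration, not taken from the paper.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar_context(input_vec, training_vecs):
    """Return the index of the training context closest to the input."""
    return max(range(len(training_vecs)),
               key=lambda i: cosine(input_vec, training_vecs[i]))

train = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.3], [0.0, 0.2, 0.9]]
print(most_similar_context([0.2, 0.7, 0.4], train))  # → 1
```

Translation units extracted from the winning training sentence would then be favored during decoding, which is the intuition behind the source-context feature.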
45. Quantum neural network based machine translator for English to Hindi
- Author
-
V. P. Singh, Ravi Narayan, and Snehashish Chakraverty
- Subjects
0209 industrial biotechnology ,Machine translation ,Computer science ,Speech recognition ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,020901 industrial engineering & automation ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,Semantic translation ,Evaluation of machine translation ,BLEU ,Hindi ,business.industry ,Transfer-based machine translation ,language.human_language ,language ,Computer-assisted translation ,NIST ,020201 artificial intelligence & image processing ,Synchronous context-free grammar ,Artificial intelligence ,business ,computer ,Software ,Natural language processing - Abstract
This paper presents a machine translation system for English to Hindi based on machine learning from a semantically correct corpus. The machine learning process is based on a quantum neural network (QNN), a novel approach to recognizing and learning corpus patterns in a realistic way. The paper presents the structure of the system, the machine translation system, and the performance results. The system performs translation using knowledge gained during learning by inputting pairs of sentences from the source to the target language, i.e., English and Hindi. Like a person, the system acquires the knowledge required for translation in implicit form by inputting pairs of sentences. The effectiveness of the proposed approach has been analyzed using 4,600 sentences of news items from various newspapers and from the Brown Corpus. During simulations and evaluation, the system achieved a BLEU score of 0.9814, a NIST score of 7.3521, a ROUGE-L score of 0.9887, a METEOR score of 0.7254, and 98.261% on human-based evaluation. The proposed system achieved significantly higher accuracy than AnglaMT, Anuvadaksh, Bing, and Google Translation.
- Published
- 2016
46. A Study of Statistical Machine Translation Methods for Under Resourced Languages
- Author
-
Win Pa Pa, Eiichiro Sumita, Andrew Finch, and Ye Kyaw Thu
- Subjects
Phrase ,Machine translation ,Translation language ,Computer science ,Speech recognition ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,03 medical and health sciences ,0302 clinical medicine ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation of machine translation ,Operation Sequence Model ,Syntax-based ,General Environmental Science ,BLEU ,business.industry ,Phrase-based ,Hierarchical Phrase-based ,030221 ophthalmology & optometry ,General Earth and Planetary Sciences ,020201 artificial intelligence & image processing ,Synchronous context-free grammar ,Artificial intelligence ,business ,computer ,Under resourced languages ,Word (computer architecture) ,Natural language processing - Abstract
This paper contributes an empirical study of the application of five state-of-the-art machine translation methods to the translation of low-resource languages. The methods studied were the phrase-based, hierarchical phrase-based, operation sequence model, string-to-tree, and tree-to-string statistical machine translation methods, between English (en) and the under-resourced languages Lao (la), Myanmar (mm), and Thai (th) in both directions. The performance of the machine translation systems was automatically measured in terms of BLEU and RIBES for all experiments. Our main finding was that the phrase-based SMT method generally gave the highest BLEU scores. This was counter to expectations, and we believe it indicates that this method may be more robust to limitations on the data set size. However, when evaluated with RIBES, the best scores came from methods other than phrase-based SMT, indicating that the other methods were able to handle word re-ordering better even under the constraint of limited data. Our study achieved the highest reported results on these data sets for all translation language pairs.
- Published
- 2016
47. AUTOMATIC EVALUATION OF MACHINE TRANSLATION QUALITY OF A SCIENTIFIC TEXT
- Author
-
Ilya Ulitkin
- Subjects
010302 applied physics ,ROUGE ,business.industry ,Computer science ,media_common.quotation_subject ,02 engineering and technology ,021001 nanoscience & nanotechnology ,computer.software_genre ,01 natural sciences ,Machine translation software usability ,0103 physical sciences ,Quality (business) ,Artificial intelligence ,Evaluation of machine translation ,0210 nano-technology ,business ,computer ,Natural language processing ,media_common - Published
- 2016
48. Assessment of Multi-Engine Machine Translation for English to Hindi Language (MEMTEHiL)
- Author
-
Pankaj K. Goswami, Sanjay K. Dwivedi, and C. K. Jha
- Subjects
060201 languages & linguistics ,Hindi ,Machine translation ,business.industry ,Computer science ,06 humanities and the arts ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,language.human_language ,Example-based machine translation ,Fluency ,Rule-based machine translation ,0602 languages and literature ,0202 electrical engineering, electronic engineering, information engineering ,language ,Computer-assisted translation ,020201 artificial intelligence & image processing ,Evaluation of machine translation ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
English to Hindi translation of computer-science-related e-content generated through a freely available online machine translation engine may not be technically correct. The target translation should be as fluent as intended for native learners, and the meaning of the source e-content should be conveyed properly. A Multi-Engine Machine Translation for English to Hindi Language (MEMTEHiL) framework has been designed and integrated by the authors as a translation solution for computer science domain e-content, enabled by the use of well-tested machine translation approaches. The humanly evaluated and widely accepted metrics of fluency and adequacy (F&A) were used to assess translation quality for the English to Hindi language pair. Besides these human-judged metrics, the well-tested interactive version of the Bilingual Evaluation Understudy (iBLEU) metric was used for evaluation. The authors incorporated both parameters (F&A and iBLEU) to assess the quality of the translations regenerated by the designed MEMTEHiL.
- Published
- 2016
49. Building a Bidirectional English-Vietnamese Statistical Machine Translation System by Using MOSES
- Author
-
Yingxiu Quan, Nguyen Quang Phuoc, and Cheol-Young Ock
- Subjects
Machine translation ,Computer science ,Vietnamese ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,Example-based machine translation ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation of machine translation ,business.industry ,language.human_language ,Linguistics ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,language ,Computer-assisted translation ,020201 artificial intelligence & image processing ,Language model ,Artificial intelligence ,0305 other medical science ,business ,computer ,Natural language processing - Abstract
In this paper, we describe an attempt at constructing a bidirectional English-Vietnamese statistical machine translation system using MOSES, an open-source toolkit for statistical machine translation. The quality of a statistical machine translation system depends on the bilingual sentence-aligned data called a parallel corpus. However, Vietnamese is an under-resourced language that was largely left out of initial corpus-building efforts. Therefore, we concentrate on building Vietnamese corpora consisting of over 880,000 English-Vietnamese sentence pairs and over 11,000,000 Vietnamese monolingual sentences to train the statistical translation model and the language model, respectively. According to the obtained BLEU scores, the proposed system outperforms the Google and Microsoft Bing translators in both English-to-Vietnamese and Vietnamese-to-English translation.
- Published
- 2016
50. Using Dictionary and Lemmatizer to Improve Low Resource English-Malay Statistical Machine Translation System
- Author
-
Tien-Ping Tan, Yin-Lai Yeong, and Siti Khaotijah Mohammad
- Subjects
Machine translation ,Computer science ,Speech recognition ,parallel corpus ,02 engineering and technology ,computer.software_genre ,Machine translation software usability ,lemmatization ,Domain (software engineering) ,Example-based machine translation ,English-Malay ,Rule-based machine translation ,0202 electrical engineering, electronic engineering, information engineering ,Evaluation of machine translation ,Statistical machine tranlstion ,General Environmental Science ,BLEU ,Malay ,business.industry ,Lemmatisation ,Bilingual dictionary ,020206 networking & telecommunications ,language.human_language ,Machine-readable dictionary ,language ,General Earth and Planetary Sciences ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Natural language processing ,dictionary - Abstract
Statistical Machine Translation (SMT) is one of the most popular methods for machine translation. In this work, we carried out English-Malay SMT by acquiring an English-Malay parallel corpus in the computer science domain, while the training parallel corpus is from a general domain. Thus, many out-of-vocabulary words occur during translation. We attempt to improve English-Malay SMT in the computer science domain using a dictionary and an English lemmatizer. Our study shows that a combined approach using a bilingual dictionary and English lemmatization improves the BLEU score for English-to-Malay translation from 12.90 to 15.41.
- Published
- 2016
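The dictionary-plus-lemmatizer idea from the abstract above can be sketched as follows; the dictionary entries and the crude suffix-stripping rules are toy stand-ins for the real bilingual dictionary and English lemmatizer used in the paper.

```python
# Minimal sketch of the dictionary-plus-lemmatizer idea: if a source word is
# out of vocabulary, strip a common English suffix and retry the bilingual
# dictionary. The entries and suffix rules below are toy stand-ins; the paper
# uses a full English lemmatizer and a real English-Malay dictionary.

EN_MY_DICT = {"compile": "kompil", "program": "atur cara"}  # toy entries
SUFFIXES = ("ing", "ed", "es", "s")  # crude stand-in for lemmatization

def translate_oov(word, dictionary=EN_MY_DICT):
    if word in dictionary:
        return dictionary[word]
    for suf in SUFFIXES:
        if word.endswith(suf):
            lemma = word[: -len(suf)]
            if lemma in dictionary:
                return dictionary[lemma]
    return word  # still out of vocabulary: pass through untranslated

print(translate_oov("programs"))  # → atur cara
```

Note that naive suffix stripping fails for spelling changes (e.g., "compiling" does not reduce to "compile"), which is one reason the paper relies on a proper lemmatizer rather than rules like these.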