
Sunday, 22 December 2024

How to Prepare a French-to-English Dataset for Machine Translation

 Machine translation is the challenging task of converting text from a source language into coherent and matching text in a target language.

Neural machine translation systems such as encoder-decoder recurrent neural networks are achieving state-of-the-art results for machine translation with a single end-to-end system trained directly on source and target language.

Standard datasets are required to develop, explore, and familiarize yourself with how to develop neural machine translation systems.

In this tutorial, you will discover the Europarl standard machine translation dataset and how to prepare the data for modeling.

After completing this tutorial, you will know:

  • The Europarl dataset comprises the proceedings of the European Parliament in 11 languages.
  • How to load and clean the parallel French and English transcripts ready for modeling in a neural machine translation system.
  • How to reduce the vocabulary size of both French and English data in order to reduce the complexity of the translation task.

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Giuseppe Milo, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 parts; they are:

  1. Europarl Machine Translation Dataset
  2. Download French-English Dataset
  3. Load Dataset
  4. Clean Dataset
  5. Reduce Vocabulary

Python Environment

This tutorial assumes you have a Python 3 SciPy environment installed.

The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help with your environment, see this post:


Europarl Machine Translation Dataset

Europarl is a standard dataset used for statistical machine translation, and more recently, neural machine translation.

It comprises the proceedings of the European Parliament, hence the name Europarl, a contraction of “European Parliament.”

The proceedings are the transcriptions of speakers at the European Parliament, which are translated into 11 different languages.

It is a collection of the proceedings of the European Parliament, dating back to 1996. Altogether, the corpus comprises of about 30 million words for each of the 11 official languages of the European Union

— Europarl: A Parallel Corpus for Statistical Machine Translation, 2005.

The raw data is available on the European Parliament website in HTML format.

The creation of the dataset was led by Philipp Koehn, author of the book “Statistical Machine Translation.”

The dataset was made available for free to researchers on the website “European Parliament Proceedings Parallel Corpus 1996-2011,” and often appears as a part of machine translation challenges, such as the Machine Translation task in the 2014 Workshop on Statistical Machine Translation.

The most recent version of the dataset is version 7, released in 2012, comprised of data from 1996 to 2011.

Download French-English Dataset

We will focus on the parallel French-English dataset.

This is a prepared corpus of aligned French and English sentences recorded between 1996 and 2011.

The dataset has the following statistics:

  • Sentences: 2,007,723
  • French words: 51,388,643
  • English words: 50,196,035

You can download the dataset from here:

  • European Parliament Proceedings Parallel Corpus 1996-2011 (http://www.statmt.org/europarl/)

Once downloaded, you should have the file “fr-en.tgz” in your current working directory.

You can unzip this archive file using the tar command, as follows:
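
For example, with the archive in the current working directory, a command along these lines will extract it:

tar xzf fr-en.tgz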

You will now have two files, as follows:

  • English: europarl-v7.fr-en.en (288M)
  • French: europarl-v7.fr-en.fr (331M)

Below is a sample of the English file.

Below is a sample of the French file.

Load Dataset

Let’s start off by loading the data files.

We can load each file as a string. Because the files contain unicode characters, we must specify an encoding when loading the files as text. In this case, we will use UTF-8, which will easily handle the unicode characters in both files.

The function below, named load_doc(), will load a given file and return it as a blob of text.
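
A minimal sketch of such a function, assuming the files are UTF-8 encoded text as described above, might look like the following:

# load doc into memory as a single blob of text
def load_doc(filename):
    # open the file with an explicit utf-8 encoding
    file = open(filename, mode='rt', encoding='utf-8')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text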

Next, we can split the file into sentences.

Generally, one utterance is stored on each line. We can treat these as sentences and split the file by new line characters. The function to_sentences() below will split a loaded document.
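
A minimal sketch of this function, assuming one utterance per line as described, might be:

# split a loaded document into sentences, one per line
def to_sentences(doc):
    return doc.strip().split('\n')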

When preparing our model later, we will need to know the length of sentences in the dataset. We can write a short function to calculate the shortest and longest sentences.
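
For example, a small helper like the one sketched below returns the shortest and longest sentence lengths in words; the name sentence_lengths() is assumed here for illustration:

# shortest and longest sentence length, measured in words
def sentence_lengths(sentences):
    lengths = [len(s.split()) for s in sentences]
    return min(lengths), max(lengths)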

We can tie all of this together to load and summarize the English and French data files. The complete example is listed below.
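
A sketch of how these pieces might be tied together is given below; it assumes the load_doc(), to_sentences(), and sentence_lengths() functions above are defined in the same file and that both Europarl files are in the current working directory:

# summarize the English data
doc = load_doc('europarl-v7.fr-en.en')
sentences = to_sentences(doc)
minlen, maxlen = sentence_lengths(sentences)
print('English data: sentences=%d, min=%d, max=%d' % (len(sentences), minlen, maxlen))

# summarize the French data
doc = load_doc('europarl-v7.fr-en.fr')
sentences = to_sentences(doc)
minlen, maxlen = sentence_lengths(sentences)
print('French data: sentences=%d, min=%d, max=%d' % (len(sentences), minlen, maxlen))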

Running the example summarizes the number of lines or sentences in each file and the length of the longest and shortest lines in each file.

Importantly, we can see that the number of lines, 2,007,723, matches the expectation.

Clean Dataset

The data needs some minimal cleaning before being used to train a neural translation model.

Looking at some samples of text, some minimal text cleaning may include:

  • Tokenizing text by white space.
  • Normalizing case to lowercase.
  • Removing punctuation from each word.
  • Removing non-printable characters.
  • Converting French characters to Latin characters.
  • Removing words that contain non-alphabetic characters.

These are just some basic operations as a starting point; you may know of or require more elaborate data cleaning operations.

The function clean_lines() below implements these cleaning operations. Some notes:

  • We use the unicodedata API to normalize unicode characters, which converts accented French characters to their Latin (ASCII) equivalents.
  • We use an inverse regex match to retain only printable characters in each word.
  • We use a translation table to remove all punctuation characters from each word.
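
One possible implementation of these notes, using the re, string, and unicodedata modules from the Python standard library, is sketched below:

import re
import string
from unicodedata import normalize

# clean a list of lines
def clean_lines(lines):
    cleaned = list()
    # regex for filtering out non-printable characters
    re_print = re.compile('[^%s]' % re.escape(string.printable))
    # translation table for removing punctuation
    table = str.maketrans('', '', string.punctuation)
    for line in lines:
        # normalize unicode characters, mapping accented French characters to ASCII
        line = normalize('NFD', line).encode('ascii', 'ignore')
        line = line.decode('UTF-8')
        # tokenize on white space
        line = line.split()
        # convert to lowercase
        line = [word.lower() for word in line]
        # remove punctuation from each token
        line = [word.translate(table) for word in line]
        # remove non-printable characters from each token
        line = [re_print.sub('', w) for w in line]
        # remove tokens that contain non-alphabetic characters
        line = [word for word in line if word.isalpha()]
        # store as a single space-separated string
        cleaned.append(' '.join(line))
    return cleaned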

Once normalized, we save the lists of clean lines directly in binary format using the pickle API. This will speed up loading for subsequent operations.
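
For example, a small helper along these lines (the name save_clean_sentences() is assumed here for illustration) will pickle a list of clean lines to file:

from pickle import dump

# save a list of clean sentences to file in binary format
def save_clean_sentences(sentences, filename):
    dump(sentences, open(filename, 'wb'))
    print('Saved: %s' % filename)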

Reusing the loading and splitting functions developed in the previous sections, the complete example is listed below.
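
A sketch of the tied-together script is given below; it assumes the load_doc(), to_sentences(), clean_lines(), and save_clean_sentences() functions shown above are defined in the same file:

# load, clean, and save the English data
doc = load_doc('europarl-v7.fr-en.en')
sentences = to_sentences(doc)
sentences = clean_lines(sentences)
save_clean_sentences(sentences, 'english.pkl')
# spot check a few clean lines
for i in range(10):
    print(sentences[i])

# load, clean, and save the French data
doc = load_doc('europarl-v7.fr-en.fr')
sentences = to_sentences(doc)
sentences = clean_lines(sentences)
save_clean_sentences(sentences, 'french.pkl')
# spot check a few clean lines
for i in range(10):
    print(sentences[i])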

After running, the clean sentences are saved in english.pkl and french.pkl files respectively.

As part of the run, we also print the first few lines of each list of clean sentences, reproduced below.

English:

French:

My reading of French is very limited, but at least as far as the English is concerned, further improvements could be made, such as dropping or concatenating hanging ‘s‘ characters for plurals.

Reduce Vocabulary

As part of the data cleaning, it is important to constrain the vocabulary of both the source and target languages.

The difficulty of the translation task is proportional to the size of the vocabularies, which in turn impacts model training time and the size of a dataset required to make the model viable.

In this section, we will reduce the vocabulary of both the English and French text and mark all out of vocabulary (OOV) words with a special token.

We can start by loading the pickled clean lines saved from the previous section. The load_clean_sentences() function below will load and return a list for a given filename.
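
A minimal sketch of this function, assuming the pickle files produced in the previous section, might be:

from pickle import load

# load a pickled list of clean sentences
def load_clean_sentences(filename):
    return load(open(filename, 'rb'))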

Next, we can count the occurrence of each word in the dataset. For this we can use a Counter object, which is a Python dictionary keyed on words that updates a count each time a new occurrence of a word is added.

The to_vocab() function below creates a vocabulary for a given list of sentences.
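
A sketch of this function, using collections.Counter as described, might look like the following:

from collections import Counter

# build a word-frequency vocabulary for a list of sentences
def to_vocab(lines):
    vocab = Counter()
    for line in lines:
        tokens = line.split()
        vocab.update(tokens)
    return vocab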

We can then process the created vocabulary and remove all words from the Counter that have an occurrence below a specific threshold.

The trim_vocab() function below does this and accepts a minimum occurrence count as a parameter and returns an updated vocabulary.
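
A sketch of this function might be:

# keep only words that occur at least min_occurrence times
def trim_vocab(vocab, min_occurrence):
    tokens = [k for k, c in vocab.items() if c >= min_occurrence]
    return set(tokens)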

Finally, we can update the sentences, removing all words not in the trimmed vocabulary and marking their removal with a special token, in this case the string “unk“.

The update_dataset() function below performs this operation and returns a list of updated lines that can then be saved to a new file.
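
A sketch of this function might be:

# mark all out-of-vocabulary words with the token 'unk'
def update_dataset(lines, vocab):
    new_lines = list()
    for line in lines:
        new_tokens = [token if token in vocab else 'unk' for token in line.split()]
        new_lines.append(' '.join(new_tokens))
    return new_lines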

We can tie all of this together and reduce the vocabulary for both the English and French dataset and save the results to new data files.

We will use a min occurrence of 5, but you are free to explore other min occurrence counts suitable for your application.

The complete code example is listed below.
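
A sketch of the complete script is given below; it assumes the load_clean_sentences(), to_vocab(), trim_vocab(), and update_dataset() functions above, plus a pickling helper like the save_clean_sentences() function from the previous section, are defined in the same file:

# minimum occurrence count for a word to be kept in the vocabulary
min_occurrence = 5

# reduce the English vocabulary
lines = load_clean_sentences('english.pkl')
vocab = to_vocab(lines)
print('English Vocabulary: %d' % len(vocab))
vocab = trim_vocab(vocab, min_occurrence)
print('New English Vocabulary: %d' % len(vocab))
lines = update_dataset(lines, vocab)
save_clean_sentences(lines, 'english_vocab.pkl')
# spot check a few updated lines
for i in range(10):
    print(lines[i])

# reduce the French vocabulary
lines = load_clean_sentences('french.pkl')
vocab = to_vocab(lines)
print('French Vocabulary: %d' % len(vocab))
vocab = trim_vocab(vocab, min_occurrence)
print('New French Vocabulary: %d' % len(vocab))
lines = update_dataset(lines, vocab)
save_clean_sentences(lines, 'french_vocab.pkl')
# spot check a few updated lines
for i in range(10):
    print(lines[i])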

First, the size of the English vocabulary is reported, followed by the updated size. The updated dataset is saved to the file ‘english_vocab.pkl‘, and a spot check of some updated examples, with out-of-vocabulary words replaced with “unk”, is printed.

We can see that the size of the vocabulary was shrunk by about half to a little over 40,000 words.

The same procedure is then performed on the French dataset, saving the result to the file ‘french_vocab.pkl‘.

We see a similar shrinking of the size of the French vocabulary.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

  • Europarl: A Parallel Corpus for Statistical Machine Translation, 2005.
  • European Parliament Proceedings Parallel Corpus 1996-2011 (http://www.statmt.org/europarl/)

Summary

In this tutorial, you discovered the Europarl machine translation dataset and how to prepare the data for modeling.

Specifically, you learned:

  • The Europarl dataset comprises the proceedings of the European Parliament in 11 languages.
  • How to load and clean the parallel French and English transcripts ready for modeling in a neural machine translation system.
  • How to reduce the vocabulary size of both French and English data in order to reduce the complexity of the translation task.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

