Background

The Arabic track for the 2017 multi-dialect, multi-genre evaluation (MGB-3) is an extension of the 2016 evaluation (MGB-2).

In addition to the 1,200 hours of Aljazeera TV programs used in 2016, this year's evaluation will explore multi-genre data: comedy, cooking, cultural, environment, family-kids, fashion, movies-drama, sports, and science talks (TEDx).

MGB-3: This year, we are using 16 hours of multi-genre data collected from different YouTube channels. The 16 hours have been manually transcribed. The chosen Arabic dialect for this year is Egyptian. Given that dialectal Arabic has no standard orthographic rules, each program has been transcribed by four different transcribers using these transcription guidelines. The MGB-3 data is split into three sets: adaptation, development, and evaluation, which will be shared at evaluation time as shown in the dates section.

MGB-2: The 1,200 hours of Aljazeera TV programs have been manually captioned with no timing information. The QCRI Arabic ASR system was used to recognise all programs, and the ASR output was then used to align the manual captions and produce speech segments for training speech recognition systems. More than 20 hours from 2015 programs have been transcribed verbatim and manually segmented. This data is split into a development set of 10 hours and a similar evaluation set of 10 hours. Both the development and evaluation data were released in the 2016 MGB challenge; the same evaluation set will be used this year.

MGB-2 Data

Data provided includes:
  • Approximately 1,200 hours of Arabic broadcast data, obtained from about 4,000 programmes broadcast on the Aljazeera Arabic TV channel over a span of 10 years, from 2005 to September 2015.
  • Time-aligned transcriptions produced by lightly supervised alignment, with varying quality of the human transcription across episodes.
  • More than 110 million words collected from the Aljazeera.net website between 2004 and 2011.

Metadata for each program includes the title, genre tag, and date/time of transmission. The original set of data for this period contained about 1,500 hours of audio, obtained from all shows; we have removed programmes with damaged aligned transcriptions. The aligned segmented transcriptions will be shared, as well as the original raw transcriptions (which have no timing information).

Data: Description of the provided data

For each program, we will share the following:

  • The original raw transcription from Aljazeera as shown on the Aljazeera website. The Arabic text in each file will be in UTF-8 encoding.
  • XML including time information for each segment, as well as the title, genre tag, and date/time of transmission of the program, in Buckwalter transliteration format.
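Buckwalter transliteration is a one-to-one mapping between Arabic script and ASCII characters. As a rough illustration only (this sketch covers just a handful of letters, not the full scheme used in the released files):

```python
# Minimal sketch of Buckwalter transliteration: a one-to-one mapping from
# Arabic script to ASCII. Only a few characters are shown for illustration;
# the full scheme covers the whole Arabic alphabet plus diacritics.
BUCKWALTER = {
    "\u0627": "A",  # alif
    "\u0628": "b",  # ba
    "\u062A": "t",  # ta
    "\u0633": "s",  # seen
    "\u0644": "l",  # lam
    "\u0645": "m",  # meem
    "\u0646": "n",  # noon
    "\u0629": "p",  # ta marbuta
    "\u064A": "y",  # ya
    " ": " ",
}

def to_buckwalter(text):
    """Transliterate Arabic text character by character."""
    return "".join(BUCKWALTER.get(ch, ch) for ch in text)

print(to_buckwalter("\u0633\u0644\u0627\u0645"))  # Arabic "salam" -> "slAm"
```

Because the mapping is one-to-one, the transliteration is losslessly reversible, which is why it is a common interchange format for Arabic NLP data.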

A sample audio file from the training data is available, along with a corresponding raw transcription and an aligned segmented transcription.

MGB-3 Data

Egyptian broadcast data collected from YouTube.

This year, we collected about 80 programs from different YouTube channels. The first 12 minutes of each program have been transcribed and released. This sums to roughly 16 hours in total, divided as follows:
  • Adaptation: 12 minutes * 24 programs
  • Development: 12 minutes * 24 programs
  • Evaluation: 12 minutes * 31 programs
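The split arithmetic above can be checked directly:

```python
# Check that the three MGB-3 splits sum to roughly 16 hours,
# given 12 transcribed minutes per program.
splits = {"adaptation": 24, "development": 24, "evaluation": 31}
minutes_per_program = 12

total_minutes = minutes_per_program * sum(splits.values())
total_hours = total_minutes / 60
print(total_minutes, round(total_hours, 1))  # 948 minutes ~ 15.8 hours
```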

All programs have been transcribed by four different annotators to explore the non-orthographic nature of dialectal Arabic.

Data: Description of the provided data

For each program, we will share the following:

  • The original raw transcriptions from the four annotators. The Arabic text in each file will be in UTF-8 encoding.
  • The segments and text files for the transcription in Buckwalter transliteration format, following the standard Kaldi data format.
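In the standard Kaldi data format, each line of the `segments` file gives an utterance ID, a recording ID, and start/end times in seconds, while each line of the `text` file pairs an utterance ID with its transcript. A small parsing sketch (the IDs and times below are hypothetical, not real MGB-3 values):

```python
# Sketch of parsing Kaldi-style data files.
# A `segments` line is: <utt-id> <recording-id> <start-sec> <end-sec>
# A `text` line is:     <utt-id> <transcript ...>
# The IDs and times used here are made-up examples.

def parse_segments_line(line):
    utt_id, rec_id, start, end = line.split()
    return utt_id, rec_id, float(start), float(end)

def parse_text_line(line):
    utt_id, transcript = line.split(maxsplit=1)
    return utt_id, transcript

utt, rec, start, end = parse_segments_line("prog01_0001 prog01 0.35 4.10")
print(utt, rec, end - start)  # utterance ID, recording ID, duration in seconds
```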

A sample audio file from the adaptation data is available, along with the corresponding raw UTF-8 transcription, the Buckwalter transcription, and the segments information.

Evaluation tasks

Participants can enter either of two tasks:

  1. Speech-to-text transcription of broadcast data
  2. Arabic Dialect Identification of Arabic audio. For this task, we are releasing 10 hours per dialect. We provide data for five Arabic dialects: Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). The data comes from broadcast news.

Tasks are described in more detail below. Each task has one or more primary evaluation conditions and possibly a number of contrastive conditions. To enter a task, participants must submit at least one system which fulfils the primary evaluation conditions. Note that signing the MGB challenge data license requires you to participate in at least one task.

Scoring tools for all tasks will be available on a GitHub repository. We will release the multi-reference word error rate (MR-WER) code to evaluate the MGB-3 data using multiple transcriptions.

Rules for all tasks
  • Only audio data and language model data supplied by the organisers can be used for transcription and alignment tasks. All metadata supplied with training data can be used.

  • Any lexicon can be used.

Transcription

This is a standard speech transcription task operating on a collection of whole TV shows drawn from diverse genres. Scoring will require ASR output with word-level timings. Segments with overlapped speech will be ignored for scoring purposes (where overlap is defined to minimise the regions removed - at segment level where possible). Speaker labels are not required in the hypothesis for scoring. The usual NIST-style mappings will be used to normalise the reference and hypothesis. In the MGB-3 competition, we will use the multi-reference word error rate (MR-WER) for scoring, to account for the non-orthographic nature of dialectal Arabic.
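The official MR-WER code will be released with the scoring tools. As a rough illustration of the idea only, one natural formulation scores the hypothesis against each available reference transcription and keeps the lowest error rate (the exact definition in the released code may differ):

```python
# Illustrative sketch of multi-reference WER (MR-WER): score the hypothesis
# against each of the four annotators' transcriptions and keep the best
# (lowest) error rate. This is a simplified formulation; the official
# scoring code defines the metric authoritatively.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def mr_wer(references, hypothesis):
    """WER against the closest of several reference transcriptions."""
    hyp = hypothesis.split()
    return min(edit_distance(r.split(), hyp) / len(r.split())
               for r in references)

refs = ["the show starts now", "the show is starting now"]
print(mr_wer(refs, "the show starts now"))  # 0.0: matches the first reference
```

Taking the minimum over references means a system is not penalised for choosing any one of the legitimate spellings of a dialectal word, which is the point of collecting four transcriptions per program.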

MGB-2: For the evaluation data, show titles and genre labels will be supplied. Some titles will have appeared in the training data, and some will be new. All genre labels will have been seen in the training data. The supplied title and genre information can be used as much as desired. Other metadata present in the development data will not be supplied for the evaluation data, but this does not preclude, for example, the usage of metadata for the development set to infer properties of shows with the same title in the evaluation data.

There will be shared speakers across the training and evaluation data. Participants may automatically identify these themselves and make use of the information; however, each program in the evaluation set should be processed independently.

MGB-3: This year, we are releasing 5 hours for adaptation and 5 hours for development to explore using them to obtain better results on dialectal data such as Egyptian comedy. We assume the MGB-3 data is not enough by itself to build a robust Arabic speech recognition system, but it could be quite useful for adaptation and hyper-parameter tuning of models built using the MGB-2 data.

Arabic Dialect Identification

In this task, participants will be supplied with more than 50 hours of labeled data, divided across the five major Arabic dialects: Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Participants are encouraged to use the 10 hours per dialect to label more data from both the MGB-2 and MGB-3 data. Dialectal data and baseline code will be shared on the QCRI dialect ID GitHub repository. The overall accuracy across the five dialects will be used as the evaluation criterion. The test data will be shared at evaluation time as shown in the dates section. Participants should specify one dialect for each audio file.
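Since each audio file receives exactly one dialect label, the evaluation criterion reduces to simple classification accuracy. A minimal sketch (the gold labels and predictions below are hypothetical examples, not real evaluation data):

```python
# Sketch of the dialect ID evaluation: overall accuracy across the five
# dialects. The dialect codes are the challenge's own; the example gold
# labels and predictions are made up for illustration.
DIALECTS = {"EGY", "LAV", "GLF", "NOR", "MSA"}

def overall_accuracy(gold, predicted):
    """Fraction of audio files assigned the correct dialect label."""
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

gold = ["EGY", "MSA", "GLF", "LAV", "NOR"]
pred = ["EGY", "MSA", "GLF", "EGY", "NOR"]
print(overall_accuracy(gold, pred))  # 0.8
```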