MGB-5 recipe

You can find the MGB-5 ASR baseline system here.

You can find the MGB-5 ADI17 baseline system here.

MGB-3 recipe

You can find the MGB-3 ADI5 baseline system here.

MGB-2 recipe

The following recipe reflects the JHU system for the MGB-2 data.

The following GitHub repository has a Kaldi recipe for a sequence-trained DNN. The recipe uses 250 hours of speech for acoustic model (AM) training and the corresponding text for language model (LM) training. The baseline results were reported on the 10-hour verbatim-transcribed development set: 34% for the non-overlapped speech (8.5 hours) and 73% for the overlapped speech (1.5 hours).
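As a quick sanity check on how the two reported figures combine, here is a minimal sketch that duration-weights them into a single number for the full 10-hour development set. It assumes the percentages are error rates measured per unit of speech duration (e.g. WER weighted by hours); the helper name is illustrative, not part of the recipe.

```python
def combined_error_rate(rates_and_hours):
    """Duration-weighted average of per-subset error rates.

    rates_and_hours: list of (error_rate_percent, hours) pairs.
    Returns the combined error rate in percent.
    """
    total_hours = sum(hours for _, hours in rates_and_hours)
    weighted = sum(rate * hours for rate, hours in rates_and_hours)
    return weighted / total_hours

# Figures from the MGB-2 baseline above:
# 34% on 8.5 hours of non-overlapped speech,
# 73% on 1.5 hours of overlapped speech.
overall = combined_error_rate([(34.0, 8.5), (73.0, 1.5)])
print(f"{overall:.2f}%")  # 39.85% over the full 10-hour dev set
```

This makes explicit why the overlapped-speech subset, despite being only 1.5 of the 10 hours, pulls the overall figure well above the 34% non-overlapped result.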

MGB-1 recipe