Representing a corpus

In Lhotse, we represent the data using a small number of Python classes, enhanced with methods for common data manipulation tasks, which can be stored as JSON or JSONL manifests. For most audio corpora, we need two types of manifests to fully describe them: a recording manifest and a supervision manifest.

Recording manifest

class lhotse.audio.Recording(id, sources, sampling_rate, num_samples, duration, transforms=None)[source]

The Recording manifest describes the recordings in a given corpus. It contains information about the recording, such as its path(s), duration, the number of samples, etc. It can represent multiple channels coming from one or more files.

This manifest does not specify any segmentation information or supervision such as the transcript or the speaker – we use SupervisionSegment for that.

Note that Recording can represent both a single utterance (e.g., in LibriSpeech) and a 1-hour session with multiple channels and speakers (e.g., in AMI). In the latter case, it is partitioned into data suitable for model training using Cut.


Lhotse reads audio recordings using pysoundfile and audioread, similarly to librosa, to support multiple audio formats. For OPUS files we require ffmpeg to be installed.


Since we support importing Kaldi data dirs, if wav.scp contains unix pipes, Recording will also handle them correctly.
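The wav.scp-to-AudioSource mapping can be sketched as follows. This is a dependency-free illustration, not Lhotse's actual import code; the dict fields mirror the Recording.to_dict() layout shown later on this page:

```python
def wav_scp_entry_to_source(value: str) -> dict:
    # Kaldi wav.scp values ending in "|" are unix pipes; a pipe becomes an
    # AudioSource of type 'command' (with the trailing "|" stripped), while a
    # plain path becomes type 'file'. A sketch of the mapping only.
    value = value.strip()
    if value.endswith('|'):
        return {'type': 'command', 'channels': [0], 'source': value[:-1].strip()}
    return {'type': 'file', 'channels': [0], 'source': value}
```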


A Recording can be simply created from a local audio file:

>>> from lhotse import RecordingSet, Recording, AudioSource
>>> recording = Recording.from_file('meeting.wav')
>>> recording
Recording(
    id='meeting',
    sources=[AudioSource(type='file', channels=[0], source='meeting.wav')],
    sampling_rate=16000,
    num_samples=57600000,
    duration=3600.0
)

This manifest can be easily converted to a Python dict and serialized to JSON/JSONL/YAML/etc:

>>> recording.to_dict()
{'id': 'meeting',
 'sources': [{'type': 'file',
   'channels': [0],
   'source': 'meeting.wav'}],
 'sampling_rate': 16000,
 'num_samples': 57600000,
 'duration': 3600.0}

Recordings can also be created programmatically, e.g. when they refer to URLs stored in S3 or somewhere else:

>>> s3_audio_files = ['s3://my-bucket/123-5678.flac', ...]
>>> recs = RecordingSet.from_recordings(
...     Recording(
...         id=url.split('/')[-1].replace('.flac', ''),
...         sources=[AudioSource(type='url', source=url, channels=[0])],
...         sampling_rate=16000,
...         num_samples=get_num_samples(url),
...         duration=get_duration(url)
...     )
...     for url in s3_audio_files
... )

It allows reading all of the audio samples, or a subset of them, as a numpy array (the example below assumes a 1-second recording sampled at 16 kHz):

>>> samples = recording.load_audio()
>>> assert samples.shape == (1, 16000)
>>> samples2 = recording.load_audio(offset=0.5)
>>> assert samples2.shape == (1, 8000)
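The offset/duration arguments translate to sample indices at the recording's sampling rate. A sketch of that arithmetic (Lhotse's exact rounding and bounds handling may differ slightly):

```python
def sample_range(offset: float, duration, sampling_rate: int, total_samples: int):
    # Convert second-based offset/duration into a (begin, end) sample range,
    # as Recording.load_audio(offset=..., duration=...) does conceptually.
    begin = round(offset * sampling_rate)
    end = total_samples if duration is None else begin + round(duration * sampling_rate)
    return begin, min(end, total_samples)
```

For a 1-second, 16 kHz recording, sample_range(0.5, None, 16000, 16000) yields (8000, 16000), i.e. the 8000 remaining samples, consistent with the shape of samples2 above.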

RecordingSet represents a collection of recordings, indexed by recording IDs. It does not contain any annotation such as the transcript or the speaker identity – just the information needed to retrieve a recording such as its path, URL, number of channels, and some recording metadata (duration, number of samples).

It also supports (de)serialization to/from YAML/JSON/etc. and takes care of mapping between rich Python classes and YAML/JSON/etc. primitives during conversion.
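In JSONL form, the serialization amounts to one manifest dict per line. A dependency-free sketch using the stdlib json module and the manifest dict from the to_dict() example above:

```python
import io
import json

# One Recording manifest, as produced by Recording.to_dict() above.
recordings = [
    {'id': 'meeting',
     'sources': [{'type': 'file', 'channels': [0], 'source': 'meeting.wav'}],
     'sampling_rate': 16000,
     'num_samples': 57600000,
     'duration': 3600.0},
]

# Write one JSON object per line (the JSONL convention), then read it back.
buf = io.StringIO()
for rec in recordings:
    buf.write(json.dumps(rec) + '\n')
buf.seek(0)
restored = [json.loads(line) for line in buf]
assert restored == recordings
```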

When coming from Kaldi, think of it as wav.scp on steroids: RecordingSet also holds the information from reco2dur and reco2num_samples, can represent multi-channel recordings and read a specified subset of channels, and supports reading audio files directly, via a unix pipe, or by downloading them on the fly from a URL (HTTPS/S3/Azure/GCP/etc.).


RecordingSet can be created from an iterable of Recording objects:

>>> from lhotse import RecordingSet
>>> audio_paths = ['123-5678.wav', ...]
>>> recs = RecordingSet.from_recordings(Recording.from_file(p) for p in audio_paths)

As well as from a directory, which will be scanned recursively for matching files using parallel processing:

>>> recs2 = RecordingSet.from_dir('/data/audio', pattern='*.flac', num_jobs=4)

It behaves similarly to a dict:

>>> '123-5678' in recs
>>> recording = recs['123-5678']
>>> for recording in recs:
...     pass
>>> len(recs)

It also provides some utilities for I/O:

>>> recs.to_file('recordings.jsonl')
>>> recs.to_file('recordings.json.gz')  # auto-compression
>>> recs2 = RecordingSet.from_file('recordings.jsonl')


And some utilities for manipulation:

>>> longer_than_5s = recs.filter(lambda r: r.duration > 5)
>>> first_100 = recs.subset(first=100)
>>> split_into_4 = recs.split(num_splits=4)
>>> shuffled = recs.shuffle()

And lazy data augmentation/transformation, which requires adjusting some information in the manifest (e.g., num_samples or duration). Note that in the following examples, the audio is untouched – the operations are stored in the manifest and executed upon reading the audio:

>>> recs_sp = recs.perturb_speed(factor=1.1)
>>> recs_vp = recs.perturb_volume(factor=2.)
>>> recs_24k = recs.resample(24000)
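The metadata bookkeeping behind such lazy transforms can be sketched for the speed perturbation case. This is an illustration of the arithmetic only; Lhotse's exact rounding may differ:

```python
def speed_perturbed_metadata(num_samples: int, sampling_rate: int, factor: float):
    # Speed perturbation by `factor` makes the signal shorter by 1/factor:
    # num_samples and duration shrink in the manifest, while sampling_rate
    # stays the same. The audio itself is only changed upon loading.
    new_num_samples = round(num_samples / factor)
    new_duration = new_num_samples / sampling_rate
    return new_num_samples, new_duration
```

For example, a 1-second, 16 kHz recording perturbed with factor=2.0 would be described as 8000 samples / 0.5 seconds in the resulting manifest.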

Supervision manifest

class lhotse.supervision.SupervisionSegment(id, recording_id, start, duration, channel=0, text=None, language=None, speaker=None, gender=None, custom=None, alignment=None)[source]

SupervisionSegment represents a time interval (segment) annotated with some supervision labels and/or metadata, such as the transcription, the speaker identity, the language, etc.

Each supervision has a unique id and always refers to a specific recording (via recording_id) and a specific channel (by default, 0). It’s also characterized by the start time (relative to the beginning of a Recording or a Cut) and a duration, both expressed in seconds.

The remaining fields are all optional, and their availability depends on specific corpora. Since it is difficult to predict all possible types of metadata, the custom field (a dict) can be used to insert types of supervisions that are not supported out of the box.

SupervisionSegment may contain multiple types of alignments. The alignment field is a dict, indexed by the alignment’s type (e.g., word or phone), and contains a list of AlignmentItem objects – simple structures that contain a given symbol and its time interval. Alignments can be read from CTM files or created programmatically.


A simple segment with no supervision information:

>>> from lhotse import SupervisionSegment
>>> sup0 = SupervisionSegment(
...     id='rec00001-sup00000', recording_id='rec00001',
...     start=0.5, duration=5.0, channel=0
... )

Typical supervision containing transcript, speaker ID, gender, and language:

>>> sup1 = SupervisionSegment(
...     id='rec00001-sup00001', recording_id='rec00001',
...     start=5.5, duration=3.0, channel=0,
...     text='transcript of the second segment',
...     speaker='Norman Dyhrentfurth', language='English', gender='M'
... )

Two supervisions denoting overlapping speech on two separate channels in a microphone array/multiple headsets (pay attention to start, duration, and channel):

>>> sup2 = SupervisionSegment(
...     id='rec00001-sup00002', recording_id='rec00001',
...     start=15.0, duration=5.0, channel=0,
...     text="i have incredibly good news for you",
...     speaker='Norman Dyhrentfurth', language='English', gender='M'
... )
>>> sup3 = SupervisionSegment(
...     id='rec00001-sup00003', recording_id='rec00001',
...     start=18.0, duration=3.0, channel=1,
...     text="say what",
...     speaker='Hervey Arman', language='English', gender='M'
... )

A supervision with a phone alignment:

>>> from lhotse.supervision import AlignmentItem
>>> sup4 = SupervisionSegment(
...     id='rec00001-sup00004', recording_id='rec00001',
...     start=33.0, duration=1.0, channel=0,
...     text="ice",
...     speaker='Maryla Zechariah', language='English', gender='F',
...     alignment={
...         'phone': [
...             AlignmentItem(symbol='AY0', start=33.0, duration=0.6),
...             AlignmentItem(symbol='S', start=33.6, duration=0.4)
...         ]
...     }
... )
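Reading an alignment like the one above from a CTM file (fields: recording id, channel, start, duration, symbol) can be sketched as follows. The Item namedtuple is a dependency-free stand-in for lhotse.supervision.AlignmentItem:

```python
from collections import namedtuple

# Stand-in for AlignmentItem (symbol + time interval), to keep the sketch
# free of non-stdlib dependencies.
Item = namedtuple('Item', 'symbol start duration')

def read_ctm(lines, kind='phone'):
    # Parse CTM lines "<recording-id> <channel> <start> <duration> <symbol>"
    # into per-recording alignment dicts shaped like SupervisionSegment.alignment.
    alignments = {}
    for line in lines:
        rec_id, _channel, start, dur, symbol = line.split()[:5]
        alignments.setdefault(rec_id, {kind: []})[kind].append(
            Item(symbol=symbol, start=float(start), duration=float(dur))
        )
    return alignments
```

Feeding it the phone alignment from sup4 would look like read_ctm(['rec00001 1 33.0 0.6 AY0', 'rec00001 1 33.6 0.4 S']).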

Converting SupervisionSegment to a dict:

>>> sup0.to_dict()
{'id': 'rec00001-sup00000', 'recording_id': 'rec00001', 'start': 0.5, 'duration': 5.0, 'channel': 0}

class lhotse.supervision.SupervisionSet(segments)[source]

SupervisionSet represents a collection of segments containing some supervision information (see SupervisionSegment), indexed by segment IDs.

It acts as a Python dict, extended with an efficient find operation that indexes and caches the supervision segments in an interval tree. This makes it quick to find supervision segments that correspond to a specific time interval.

When coming from Kaldi, think of SupervisionSet as a segments file on steroids, that may also contain text, utt2spk, utt2gender, utt2dur, etc.


Building a SupervisionSet:

>>> from lhotse import SupervisionSet, SupervisionSegment
>>> sups = SupervisionSet.from_segments([SupervisionSegment(...), ...])

Writing/reading a SupervisionSet:

>>> sups.to_file('supervisions.jsonl.gz')
>>> sups2 = SupervisionSet.from_file('supervisions.jsonl.gz')

Using SupervisionSet like a dict:

>>> 'rec00001-sup00000' in sups
>>> sups['rec00001-sup00000']
SupervisionSegment(id='rec00001-sup00000', recording_id='rec00001', start=0.5, ...)
>>> for segment in sups:
...     pass

Searching by recording_id and time interval:

>>> matched_segments = sups.find(recording_id='rec00001', start_after=17.0, end_before=25.0)
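The semantics of find() can be sketched with a plain linear scan over dicts (the real implementation uses an interval tree for speed; exact edge-case handling is an approximation here):

```python
def find(segments, recording_id, start_after=0.0, end_before=None):
    # Yield segments of the given recording that start after `start_after`
    # and, if `end_before` is given, end before it.
    for seg in segments:
        if seg['recording_id'] != recording_id:
            continue
        end = seg['start'] + seg['duration']
        if seg['start'] >= start_after and (end_before is None or end <= end_before):
            yield seg
```

Applied to the sup2/sup3 intervals above with start_after=17.0 and end_before=25.0, only the segment starting at 18.0 matches (the one starting at 15.0 begins too early).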


And the same manipulation utilities as RecordingSet:

>>> longer_than_5s = sups.filter(lambda s: s.duration > 5)
>>> first_100 = sups.subset(first=100)
>>> split_into_4 = sups.split(num_splits=4)
>>> shuffled = sups.shuffle()

Standard data preparation recipes

We provide a number of standard data preparation recipes. By that, we mean a pair of a Python function and a CLI tool per corpus that create the manifests given a corpus directory.

Currently supported corpora

Lhotse ships recipes for a growing list of corpora (see lhotse/recipes for the complete, up-to-date list), including:

CallHome Egyptian
CallHome English
CMU Arctic
CMU Indic
CMU Kids
English Broadcast News 1997
Fisher English Part 1, 2
Fisher Spanish
GALE Arabic Broadcast Speech
GALE Mandarin Broadcast Speech
L2 Arctic
LibriSpeech (including “mini”)
LJ Speech
Multilingual LibriSpeech (MLS)
National Speech Corpus (Singaporean English)

Adding new corpora


Python data preparation recipes. Each corpus has a dedicated Python file in lhotse/recipes, which you can use as the basis for your own recipe.


(optional) Downloading utility. For publicly available corpora that can be freely downloaded, we usually define a function called download_<corpus-name>().


Data preparation Python entry-point. Each data preparation recipe should expose a single function called prepare_<corpus-name>, that produces dicts like: {'recordings': <RecordingSet>, 'supervisions': <SupervisionSet>}.


CLI recipe wrappers. We provide a command-line interface that wraps the download and prepare functions – see lhotse/bin/modes/recipes for examples of how to do it.


Pre-defined train/dev/test splits. When a corpus defines a standard split (e.g. train/dev/test), we return a dict with the following structure: {'train': {'recordings': <RecordingSet>, 'supervisions': <SupervisionSet>}, 'dev': ...}
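That contract can be sketched as a skeleton of a hypothetical prepare_<corpus-name> function. Plain values stand in for the RecordingSet/SupervisionSet objects a real recipe would build from disk:

```python
def prepare_corpus(splits):
    # Sketch of the prepare_<corpus-name> return contract for a corpus with
    # standard splits. `splits` maps split names to (recordings, supervisions)
    # pairs; a real recipe would construct these by scanning the corpus dir.
    return {
        name: {'recordings': recs, 'supervisions': sups}
        for name, (recs, sups) in splits.items()
    }
```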


Isolated utterance corpora. Some corpora (like LibriSpeech) come with pre-segmented recordings. In these cases, the SupervisionSegment will exactly match the Recording duration (and there will likely be exactly one segment corresponding to any recording).


Conversational corpora. Corpora with longer recordings (e.g. conversational, like Switchboard) should have exactly one Recording object corresponding to a single conversation/session that spans its whole duration. Each speech segment in that recording should be represented as a SupervisionSegment with the same recording_id value.


Multi-channel corpora. Corpora with multiple channels for each session (e.g. AMI) should have a single Recording with multiple AudioSource objects – each corresponding to a separate channel. Remember to make the SupervisionSegment objects correspond to the right channels!
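The multi-channel layout can be sketched with plain dicts mirroring Recording.to_dict(): one Recording, one AudioSource per channel file, channels numbered from 0. The helper name and its arguments are hypothetical:

```python
def multichannel_recording(rec_id, channel_wavs, sampling_rate, num_samples):
    # One Recording with one AudioSource per per-channel file; each source
    # carries its own channel index, so supervisions can refer to the right
    # channel via SupervisionSegment.channel.
    return {
        'id': rec_id,
        'sources': [
            {'type': 'file', 'channels': [ch], 'source': path}
            for ch, path in enumerate(channel_wavs)
        ],
        'sampling_rate': sampling_rate,
        'num_samples': num_samples,
        'duration': num_samples / sampling_rate,
    }
```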