Berlin EmoDB

A database of German emotional speech

Last update: 10/1/05

If you use the database for your research, please cite the following article:
Felix Burkhardt, Astrid Paeschke, Miriam Rolfes, Walther F. Sendlmeier and Benjamin Weiss: A Database of German Emotional Speech, Proc. Interspeech 2005.

Description of the database

The aim of the project Phonetic Reductions and Elaborations in Emotional Speech, carried out by the Institute of Communication Science of the Technical University of Berlin (TU Berlin) and funded by the German Research Foundation (DFG), is to examine acoustic correlates of emotional speech.
An emotional database comprising 6 basic emotions (anger, joy, sadness, fear, disgust and boredom) as well as neutral speech was recorded.
Ten professional native German actors (5 female and 5 male) simulated these emotions, producing 10 utterances (5 short and 5 longer sentences) that could be used in everyday communication and are interpretable in all of the applied emotions.
The recordings were made using a Sennheiser MKH 40 P 48 microphone and a Tascam DA P1 portable DAT-recorder in an anechoic chamber.
In addition electro-glottograms were recorded using the portable Laryngograph (Laryngograph LTD.).
The recorded speech material of about 800 sentences (7 emotions × 10 actors × 10 sentences, plus some second versions) is now being evaluated for recognizability and naturalness in a forced-choice automated listening test with 20-30 judges.
Those utterances for which the emotion was recognized by at least 80 % of the listeners will be used for further analysis.
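The bookkeeping behind this selection is simple enough to sketch. The snippet below is illustrative only: the utterance identifiers and vote counts are invented, not taken from the database; only the corpus dimensions (7 emotions, 10 actors, 10 sentences) and the 80 % threshold come from the description above.

```python
# Sketch of the corpus dimensions and the 80% selection criterion.
# Vote counts below are toy values, not real listening-test results.

EMOTIONS = ["anger", "joy", "sadness", "fear", "disgust", "boredom", "neutral"]
N_ACTORS = 10
N_SENTENCES = 10

# Nominal corpus size before second versions: 7 * 10 * 10 = 700
nominal_size = len(EMOTIONS) * N_ACTORS * N_SENTENCES

def passes_threshold(votes_for_intended, total_votes, threshold=0.8):
    """Keep an utterance if at least 80% of judges chose the intended emotion."""
    return votes_for_intended / total_votes >= threshold

# Toy forced-choice results: (utterance id, votes for intended emotion, judges)
results = [("u1", 18, 20), ("u2", 15, 20), ("u3", 16, 20)]
kept = [uid for uid, hits, n in results if passes_threshold(hits, n)]
# kept == ["u1", "u3"]: u2 reaches only 75% and is discarded
```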
The chosen sentences are phonetically labeled in a narrow transcription, with special markers for voice quality, settings and articulatory features.
On the basis of this database, an analysis of the distinguishing features of each emotion, compared with the neutral (unemotional) way of speaking, will be performed.
The perceptual relevance of typical features of the emotions is evaluated by means of speech resynthesis, which allows the controlled variation of specific features.

An actor in the anechoic chamber.

Here is an example of (from top to bottom) the oscillogram, spectrogram, laryngogram and label files for a sentence spoken in a sad mood.


These are examples of oscillograms and broadband spectrograms from the database (always the same sentence, different male speakers), each with a wav file (~100 kB):

neutral
bored
angry
happy
sad
frightened

Here are examples of the samples for all emotions, each produced by a male and a female speaker (all stimuli were recognized by at least 80 % of the listeners).
The English translation would be something like: In seven hours it will happen.

(wav, Sr=16 kHz)

female: seven wav files, one per emotion
male: seven wav files, one per emotion
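Since the samples are plain 16 kHz wav files, they can be inspected with Python's standard-library wave module. The sketch below writes a synthetic one-second silent 16 kHz mono file as a stand-in (the filename demo.wav is a placeholder, not a file from the database) and reads its parameters back.

```python
# Sketch: inspect a 16 kHz wav file with the standard-library wave module.
# "demo.wav" is a synthetic placeholder, not an actual database file.
import wave

def wav_info(path):
    """Return (sample_rate, n_channels, duration_seconds) of a wav file."""
    with wave.open(path, "rb") as f:
        rate = f.getframerate()
        return rate, f.getnchannels(), f.getnframes() / rate

# Write a one-second silent 16 kHz, 16-bit mono file for demonstration.
with wave.open("demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(16000)
    f.writeframes(b"\x00\x00" * 16000)

rate, channels, duration = wav_info("demo.wav")
# rate == 16000, channels == 1, duration == 1.0
```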

mail: Felix Burkhardt