
Music Information Retrieval

Music Information Retrieval (MIR) is concerned with the extraction and aggregation of information from musical audio data. As a field, MIR draws its methods from a diverse set of disciplines, including psychology, computer science, machine learning and music theory, with the goal of supporting humans on tasks ranging from low-level feature extraction, such as the detection of pitch, onset time, duration and source in a musical signal, to higher-level applications built on top of these features.
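
To make the low-level feature-extraction tasks mentioned above more concrete, here is a minimal sketch of onset and pitch detection. It assumes the open-source librosa library and a placeholder audio file, neither of which is named in the text; the methods used by our systems may differ.

```python
# Minimal sketch of low-level feature extraction (onsets and pitch),
# assuming librosa; "example.wav" is a placeholder path.
import librosa

# Load a mono audio file at its native sample rate.
y, sr = librosa.load("example.wav", sr=None, mono=True)

# Onset times (in seconds) from the onset strength envelope.
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# Frame-wise fundamental frequency (pitch) via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

print(f"{len(onset_times)} onsets detected")
print("first pitch estimates (Hz):", f0[:10])
```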


Research and development in MIR and its subfields (e.g. automatic music transcription, audio content analysis or music generation) addresses the need for intelligent, automated processing of the ever-growing volume of audio and music data on the Internet and in associated company databases. Extracting this musical and perceptual information from audio data helps humans and machines alike interact with music in innovative ways.

Figure: Example of an automatic transcription system applied to a 10-second audio clip; semitone spectrum (top), estimated reference transcription (bottom).
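
As a rough illustration of the semitone spectrum shown in the figure, the sketch below computes a constant-Q spectrogram with one bin per semitone. The use of librosa and the input file name are assumptions for illustration only, not details of the transcription system described here.

```python
# Hedged sketch: semitone-resolution spectrum via a constant-Q transform.
import numpy as np
import librosa

# Load the first 10 seconds of a placeholder clip at its native sample rate.
y, sr = librosa.load("clip_10s.wav", sr=None, duration=10.0)

# Constant-Q transform with 12 bins per octave, i.e. one bin per semitone,
# covering 6 octaves starting at C1.
C = np.abs(librosa.cqt(y, sr=sr, fmin=librosa.note_to_hz("C1"),
                       n_bins=72, bins_per_octave=12))

# Convert magnitudes to decibels for display or further analysis.
C_db = librosa.amplitude_to_db(C, ref=np.max)
print("semitone spectrum shape (pitch bins x frames):", C_db.shape)
```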

With our interdisciplinary team of scientists, we offer a broad range of MIR-related services, covering the full process of exploration, analysis, visualization and model building on your data. We provide expertise in cutting-edge signal processing, machine learning and artificial intelligence methods, enabling your product to incorporate applications such as automatic music transcription, track separation and instrument identification, automatic categorization, or recommender systems.

Publications (selection)

Multi-track crosstalk reduction using spectral subtraction.

Seipel, F., & Lerch, A.

Audio Engineering Society Convention 144. Audio Engineering Society, 2018.

Greb, F., Steffens, J., & Schlotz, W.

Music & Science, 1(2), 205920431875595, 2018.

Modeling Music-Selection Behavior in Everyday Life: A Multilevel Statistical Learning Approach and Mediation Analysis of Experience Sampling Data.

Greb, F., Steffens, J., & Schlotz, W.

Frontiers in Psychology, 10, 390, 2019.

Smartphone-Assessed Movement Predicts Music Properties.

Irrgang, M., Steffens, J., & Egermann, H.

MOCO: 5th International Conference on Movement and Computing, Genoa, Italy, 2018.

Steffens, J., Lepa, S., Herzog, M., Schönrock, A., Peeters, G., & Egermann, H.

Extended Abstracts for the Late-Breaking Demo Session of the 18th ISMIR Conference 2017.
