A Meta-Analysis of Music Emotion Recognition Studies
How well can we predict emotions in music? What evidence does the published literature provide about which emotions listeners can perceive in music when the source material consists of audio examples? And to what degree do the results depend on the models used, the type of emotions predicted, the musical/acoustic features, or the musical materials?
To answer these questions, we set out to record and analyse the current state of the art in the literature using a meta-analysis paradigm. We focus on Music Emotion Recognition (MER), hence the acronym metaMER.
The public-facing version of the repository is available at https://tuomaseerola.github.io/metaMER/
Plan
The aims and methods are defined in a preregistration plan, registered at https://osf.io/6y3dr/.
Study Search and Selection
Search databases and criteria are documented in studies/search_syntax.qmd.
Data Extraction and Coding
Data extraction is described in the extraction details document; see also the pass3 comparison.
The extracted data are parsed into a tabular format with a custom parser, library_parser.qmd. A minimal sketch of such a parsing step is shown below.
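To make the intended output concrete, here is a minimal sketch of this kind of parsing step in Python. The record format and the field names (id, model, emotion, r, n) are assumptions for illustration only, not the actual schema used by library_parser.qmd.

```python
# Minimal sketch of turning coded study records into a tabular format.
# The record structure (semicolon-separated "key: value" fields) is an
# assumption for illustration, not the schema used by library_parser.qmd.
import pandas as pd

raw_records = [
    "id: smith2019; model: SVR; emotion: valence; r: 0.61; n: 324",
    "id: lee2021; model: CNN; emotion: arousal; r: 0.72; n: 1802",
]

def parse_record(record: str) -> dict:
    """Split one 'key: value; key: value' record into a dict."""
    fields = (pair.split(":", 1) for pair in record.split(";"))
    return {key.strip(): value.strip() for key, value in fields}

table = pd.DataFrame(parse_record(r) for r in raw_records)
table[["r", "n"]] = table[["r", "n"]].apply(pd.to_numeric)
print(table)
```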
Analysis
Data analysis steps are covered in the analysis/analysis.qmd document.
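As an illustration of the kind of pooling such an analysis typically involves, the sketch below combines reported model-performance correlations with a random-effects (DerSimonian-Laird) model after Fisher's z-transformation. The values are invented, and the actual procedure documented in analysis/analysis.qmd may differ.

```python
# A minimal sketch of pooling reported correlations (r) across studies
# with a DerSimonian-Laird random-effects model. The input values are
# made up; the real analysis lives in analysis/analysis.qmd.
import numpy as np

r = np.array([0.61, 0.72, 0.55, 0.80])   # hypothetical reported correlations
n = np.array([324, 1802, 96, 540])       # hypothetical sample sizes

z = np.arctanh(r)                        # Fisher z-transform
v = 1.0 / (n - 3)                        # sampling variance of z
w = 1.0 / v                              # fixed-effect weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)       # heterogeneity statistic
k = len(r)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                  # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

pooled_r = np.tanh(z_re)                 # back-transform to r
ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
print(f"pooled r = {pooled_r:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```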
Manuscript
The study report is available in the manuscript/manuscript.qmd document.
Common datasets
An overview of commonly used datasets is given in manuscript/datasets.qmd.
Feature representations
A summary of the feature categories from Panda et al. (2020) for the studies included in the meta-analysis is available in studies/feature_representation.qmd.
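As an illustration of how reported features can be summarised by category, the sketch below tallies a hypothetical set of features against category labels loosely based on the musical dimensions in Panda et al. (2020). Both the feature-to-category mapping and the reported features are illustrative assumptions, not the coding used in the meta-analysis.

```python
# A minimal sketch of summarising extracted features by category.
# The category labels loosely follow the musical dimensions in
# Panda et al. (2020); the mapping below is a hypothetical example,
# not the coding in studies/feature_representation.qmd.
from collections import Counter

feature_to_category = {
    "mfcc": "tone colour",
    "spectral centroid": "tone colour",
    "tempo": "rhythm",
    "pulse clarity": "rhythm",
    "mode": "harmony",
    "key clarity": "harmony",
    "rms energy": "dynamics",
}

# Hypothetical features reported by the included studies.
reported = ["mfcc", "tempo", "mode", "mfcc", "rms energy", "pulse clarity"]

counts = Counter(feature_to_category[f] for f in reported)
for category, count in counts.most_common():
    print(f"{category}: {count}")
```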