Music Information Retrieval CISUC



  • Bi-modal (audio and lyrics) emotion dataset (Russell's model) (2016)


    We created a new bi-modal (audio and lyrics) emotion dataset. It contains 133 audio clips and corresponding lyrics, manually annotated into the 4 quadrants of Russell's model.


    If you use it, please cite the following article:

    Malheiro R., Panda R., Gomes P. & Paiva R. P. (2016). “Bi-Modal Music Emotion Recognition: Novel Lyrical Features and Dataset”. 9th International Workshop on Music and Machine Learning – MML'2016 – in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases – ECML/PKDD 2016, Riva del Garda, Italy.


  • Lyrics emotion sentences dataset (Russell's model) (2016)


    We created a new sentence-based lyrics emotion dataset for Lyrics Emotion Variation Detection research. It contains a total of 368 sentences, manually annotated into 4 quadrants (based on Russell's model). The dataset was split into a 129-sentence training set and a 239-sentence testing set.


    If you use it, please cite the following article:

    Malheiro R., Oliveira H. G., Gomes P. & Paiva R. P. (2016). “Keyword-Based Approach for Lyrics Emotion Variation Detection”. 8th International Conference on Knowledge Discovery and Information Retrieval – KDIR'2016, Porto, Portugal.


  • Lyrics emotion dataset (Russell's model) (2016)


    We created a new lyrics emotion dataset. It contains two parts: i) a 180-lyrics dataset manually annotated with arousal and valence values (based on Russell's model); ii) a 771-lyrics dataset annotated in 4 quadrants (Russell's model), based on AllMusic tags.


    If you use it, please cite the following article:

    Malheiro R., Panda R., Gomes P. & Paiva R. P. (2016). “Emotionally-Relevant Features for Classification and Regression of Music Lyrics”. IEEE Transactions on Affective Computing (accepted for publication).
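    For readers combining the two parts of the dataset, the usual convention in the Music Emotion Recognition literature is to derive a quadrant label from the sign of the valence and arousal values. The sketch below illustrates that mapping; the function name and the thresholding at zero are our own illustration, so check the dataset documentation for the exact annotation scale.

```python
def russell_quadrant(valence: float, arousal: float) -> int:
    """Map a (valence, arousal) pair to one of Russell's 4 quadrants.

    Q1: positive valence, positive arousal (e.g. happy)
    Q2: negative valence, positive arousal (e.g. angry)
    Q3: negative valence, negative arousal (e.g. sad)
    Q4: positive valence, negative arousal (e.g. calm)
    """
    if valence >= 0:
        return 1 if arousal >= 0 else 4
    return 2 if arousal >= 0 else 3

print(russell_quadrant(0.7, 0.5))    # → 1 (happy-sounding lyric)
print(russell_quadrant(-0.4, -0.6))  # → 3 (sad-sounding lyric)
```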


  • Multi-modal MIREX-like emotion dataset (2013)


    We created a new multi-modal MIREX-like emotion dataset. It contains 903 30-second audio clips, 764 lyrics and 193 MIDI files. To the best of our knowledge, this is the first emotion dataset combining these three sources (audio, lyrics and MIDI).


    If you use it, please cite the following article:

    Panda R., Malheiro R., Rocha B., Oliveira A. & Paiva R. P. (2013). “Multi-Modal Music Emotion Recognition: A New Dataset, Methodology and Comparative Analysis”. 10th International Symposium on Computer Music Multidisciplinary Research – CMMR'2013, Marseille, France.



  • MOODetector Reloaded (2016)

    Our MOODetector prototype is being updated with a new user interface and, most importantly, a more accurate regression model. We will make the software public as soon as it is stable.


  • MOODetector (2011)

    We created a prototype application for Windows (XP and above).

    • Updates:
      • March 1, 2012: Mood tracking bugs corrected

    Note: Having problems running the application? Install the Visual Studio Redistributable Package.


    This is a prototype application, intended as a proof-of-concept for Music Emotion Recognition and Music Emotion Variation Detection. Naturally, it has several limitations, particularly regarding valence prediction accuracy. It is built on top of Marsyas, which, despite its many virtues, seems to lack features that might be more relevant for valence prediction. In fact, the MIR Toolbox showed increased accuracy (see our publications for more information). Therefore, in the future we plan to extend the prototype with more relevant features, both from the MIR Toolbox and others resulting from our research.
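    At its core, valence prediction of this kind is a regression from audio features to annotated emotion values. The toy sketch below illustrates the idea with an ordinary least-squares fit; the features and valence values here are synthetic placeholders of our own making, not output from Marsyas, the MIR Toolbox or the actual prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for audio feature vectors (e.g. spectral and rhythmic
# descriptors) and valence annotations; in the real system the features would
# be extracted by a framework such as Marsyas or the MIR Toolbox.
X = rng.normal(size=(100, 8))                 # 100 clips, 8 features each
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy valence values

# Ordinary least-squares fit: the simplest regression model one could use.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"R^2 on training data: {r2:.3f}")
```

    In practice one would use held-out data and richer models, but the pipeline (feature matrix in, annotated values as targets, fitted predictor out) is the same.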

    For a detailed description of the MOODetector application prototype, please check this article:

    Cardoso L., Panda R. & Paiva R. P. (2011). “MOODetector: A Prototype Software Tool for Mood-based Playlist Generation”. Simpósio de Informática – INForum 2011, Coimbra, Portugal.


    MOODetector application



Our personal evaluation database, as well as the ISMIR 2004 database, can be downloaded here.

For more information, please go to the Projects page. Further material will be added later.

Illustration of the determination of musical notes