```
The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union
```
```r
library(stringr)
library(knitr)
```
Read data
```r
# read in bibtex library as data frame
bib_df_merged <- bib2df::bib2df('metaMER_library_third_pass_clean.bib')
```
```
Some BibTeX entries may have been dropped.
The result could be malformed.
Review the .bib file and make sure every single entry starts
with a '@'.
```
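The warning above can be cross-checked against the raw file. A minimal sketch (an added check, not part of the original script) counts lines that open an entry with '@' and compares that to the number of parsed rows; counts may differ slightly if the file contains `@Comment` or `@String` entries.

```r
# count entry headers in the raw .bib file (lines beginning with '@')
raw_bib   <- readLines('metaMER_library_third_pass_clean.bib')
n_entries <- sum(grepl('^@', trimws(raw_bib)))

# compare against the rows returned by bib2df
n_entries == nrow(bib_df_merged)
```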
Compare
```r
# check dimensions are accurate
dim(bib_df_merged)
```

```
[1] 96 47
```
```r
# distinguish notes with author initials
names(bib_df_merged)[names(bib_df_merged) == 'NOTES']   <- 'NOTES.CA'
names(bib_df_merged)[names(bib_df_merged) == 'NOTES.1'] <- 'NOTES.TE'

# extract decisions less comments
capture_group <- 'include|exclude|unsure'

# create new index to track entries
bib_df_merged$NOTES_INDEX.CA <- NA

# create new column tracking decisions
bib_df_merged$NOTES_INDEX.CA <- str_extract(tolower(bib_df_merged$NOTES.CA), capture_group)
bib_df_merged$NOTES_INDEX.TE <- str_extract(tolower(bib_df_merged$NOTES.TE), capture_group)

# check entries are consistent
sum(is.na(bib_df_merged$NOTES_INDEX.CA))
```

```
[1] 0
```

```r
sum(is.na(bib_df_merged$NOTES_INDEX.TE))
```

```
[1] 0
```
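The regex keeps only the first decision keyword and drops the surrounding comment text. A toy example (the note strings below are invented for illustration) shows what `str_extract()` returns with this capture group:

```r
# invented notes illustrating how the decision keyword is pulled out
toy_notes <- c('Include - strong stimulus description',
               'EXCLUDE: classification only, no correlations reported',
               'unsure, need to check reported statistics')

str_extract(tolower(toy_notes), 'include|exclude|unsure')
#> [1] "include" "exclude" "unsure"
```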
Report annotation reliability/agreement
```r
# compare raters' decisions with confusion matrix
t <- table(bib_df_merged$NOTES_INDEX.CA, bib_df_merged$NOTES_INDEX.TE)

# get agreement
t2 <- round(t / sum(t), 2)
ag_before <- sum(diag(t2))

# make table
knitr::kable(t, caption = paste('Votes before discussion. \n Rows: CA votes; cols: TE votes, Agreement = ', ag_before))
```
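Raw proportion agreement does not correct for chance. As a complement (an addition here, not part of the original workflow), Cohen's kappa can be computed by hand from the same confusion matrix, assuming both raters used the same set of labels so the table is square:

```r
# chance-corrected agreement (Cohen's kappa) from the confusion matrix t
p_obs    <- sum(diag(t)) / sum(t)                     # observed agreement
p_chance <- sum(rowSums(t) * colSums(t)) / sum(t)^2   # agreement expected by chance
(p_obs - p_chance) / (1 - p_chance)
```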
```r
# update TE decision
bib_df_merged$NOTES_INDEX.TE[bib_df_merged$BIBTEXKEY == 'tang2023ap'] <- 'exclude'
bib_df_merged$NOTES_INDEX.TE[bib_df_merged$BIBTEXKEY == 'xing2015em'] <- 'exclude'

# update TE and CA decisions
bib_df_merged$NOTES_INDEX.TE[bib_df_merged$BIBTEXKEY == 'wang2022mu'] <- 'exclude'
bib_df_merged$NOTES_INDEX.CA[bib_df_merged$BIBTEXKEY == 'wang2022mu'] <- 'exclude'
```
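The repeated subsetting by BIBTEXKEY could also be wrapped in a small helper; `set_decision()` below is a hypothetical convenience, not a function from the original script or any package:

```r
# hypothetical helper: overwrite the decision for one entry, for one or both raters
set_decision <- function(df, key, decision, raters = c('TE', 'CA')) {
  for (r in raters) {
    col <- paste0('NOTES_INDEX.', r)
    df[[col]][df$BIBTEXKEY == key] <- decision
  }
  df
}

# equivalent to the updates above
bib_df_merged <- set_decision(bib_df_merged, 'tang2023ap', 'exclude', raters = 'TE')
bib_df_merged <- set_decision(bib_df_merged, 'xing2015em', 'exclude', raters = 'TE')
bib_df_merged <- set_decision(bib_df_merged, 'wang2022mu', 'exclude')
```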
Summary
| Study | CA | TE | Decision |
|-------|----|----|----------|
| tang2023ap | uses image features | No relevant features, lack of stimulus detail | Exclude |
| xing2015em | includes classification task | No N of musical excerpts, missing information regarding data processing | Exclude |
| wang2022mu | only reports DET and equal error rate | Although they report correlation coefficients, emotions are not valence or arousal. Do report MSE of classification task | Exclude |
Tabulate results after resolving unsure votes
```r
# compare raters' decisions with confusion matrix
t <- table(bib_df_merged$NOTES_INDEX.CA, bib_df_merged$NOTES_INDEX.TE)

# get agreement
t2 <- round(t / sum(t), 2)
ag_before <- sum(diag(t2))

# make table
knitr::kable(t, caption = paste('Votes after resolving unsure discrepancies. \n Rows: CA votes; cols: TE votes, Agreement = ', ag_before))
```
Table: Votes after resolving unsure discrepancies.
Exclude: Data set not sufficiently detailed for MER task
Update table after resolving disagreements
```r
# compare raters' decisions with confusion matrix
t <- table(bib_df_merged$NOTES_INDEX.CA, bib_df_merged$NOTES_INDEX.TE)

# get agreement
t2 <- round(t / sum(t), 2)
ag_before <- sum(diag(t2))

# make table
knitr::kable(t, caption = paste('Votes after resolving include vs. exclude discrepancies. \n Rows: CA votes; cols: TE votes, Agreement = ', ag_before))
```
Table: Votes after resolving include vs. exclude discrepancies.