ORIGINAL RESEARCH article

Front. Educ., 30 August 2024
Sec. Assessment, Testing and Applied Measurement

Exploring the assessment of musical praxis through ICT in the academic context

  • 1Departament de Didàctica de l’Educació Física, Artística i Música, Universitat de València, Valencia, Spain
  • 2Conservatori Superior de Música de Castelló, Castelló de la Plana, Spain
  • 3Departament d’Educació, Universitat Jaume I, Castelló de la Plana, Spain

The evaluation of musical praxis involves a nuanced assessment of performer competencies within the intricate dynamics of musical elements, often hindered by subjective influences and the transient nature of performance. This study investigates the integration of Information and Communication Technology (ICT) tools to enhance instrumental praxis evaluation, focusing on French horn applicants to the University of Valencia Philharmonic Orchestra (OFUV). Employing a descriptive observational methodology and utilizing the MAXQDA application for analysis, the study examines key aspects of interpretation through individual recordings. Results demonstrate that ICT applications facilitate transparent and precise evaluation of performance aspects, underscoring the importance of incorporating these tools in performative education. In this regard, 89% of participants found the feedback to be very useful. Leveraging audio-video recordings offers a promising avenue for comprehensive analysis, providing clearer feedback and advocating for their integration by educational authorities and instructors to foster objective evaluation and enhance musical pedagogy.

1 Introduction

The assessment of instrumental musical praxis is a process that involves the evaluation and analysis of the performer’s competencies and skills. Based on the theories of Aristotle (Sinnott, 2007), Bourdieu (Costa, 2006), or Freire and Ronzoni (2004), musical instrumental praxis can be defined as the reflective and transformative practice of interpreting music, combining theory, technique, and social context to create meaningful artistic expression. It is not merely a matter of playing notes, but of bringing music to life through deep understanding and creativity. This type of assessment requires weighing the quality of many elements happening simultaneously. Musical interpretation involves subjective elements that complicate fair and objective evaluations, often leading to intense debates and criticism. To address this, assessment rubrics have been developed to consider multiple factors and parameters in musical praxis, aiming to ensure fair, objective, and consistent evaluations that offer valuable feedback to both students and teachers (Capistrán Gracia, 2023).

The study centers on performative music education, focusing specifically on evaluating musical performance for professional musicians. Unlike general music education from early childhood through to age 18 (K-12), which covers a broad range of musical skills, this study focuses on assessing interpretative abilities in live musical contexts. This specialized focus demands detailed evaluation methods to ensure high-quality performances. The primary challenge in assessing musical praxis is its ephemeral nature, making precise evaluations difficult. However, audio-video recordings facilitate a more thorough review, allowing for accurate documentation and feedback.

Currently, the use of audio-video recordings to assess interpretative competencies is commonly employed by institutions, orchestras, and competitions in their selection processes. Despite this, the application of this tool in the assessment of instrumental praxis is not yet widely implemented or included in the educational regulations of music studies, making its use almost exceptional for this purpose (Cotolí and Escorihuela, 2024). Using digital recorders and cameras to observe participants in music performance education significantly enhances the evaluation of interpretive skills and the development of musical abilities. Tools like the Musical Instrument Performance Capture and Analysis Toolbox (MIPCAT) capture physical control variables and body gestures, facilitating detailed and comparative analysis. The application of audiovisual technologies and methods such as visual ethnography and CAQDA provides immediate and effective feedback, improving result communication and reducing subjectivity (Žalys, 2023). Additionally, these technologies support flexible pedagogical methods like flipped classrooms and promote self-assessment, allowing students to better prepare and learn anytime, anywhere (Almeida et al., 2023; Nugroho and Biasutti, 2024).

The advancement of technology provides specialized applications for qualitative analysis, allowing for the evaluation of musical praxis by analyzing audio and video recordings. This need became more evident after the challenges posed by the COVID-19 pandemic to these educational practices (Carabal-Montagud et al., 2021). The drastic shift from predominantly practical and in-person teaching to virtual instruction led to the modification of all teaching methodologies and assessment processes, requiring educators to quickly acquire ICT knowledge (Jordán Enamorado and Ricart Vayá, 2022; Biasutti et al., 2023; Carreira et al., 2023).

In the field of Artificial Intelligence applied to music, the development of standards such as the Music Audio Representation Benchmark for universal Evaluation (MARBLE) is crucial. MARBLE provides a framework for evaluating various music information retrieval tasks through a comprehensive taxonomy and unified protocol. The ability of these tools to compare representations of pre-trained models on music recordings in a fair and standardized way underscores the importance of ICT in delivering precise and reproducible evaluations, allowing for continuous improvement and innovation in music AI research. Similarly, in the realm of Extended Reality (XR) and Networked Music Performance (NMP) modalities, evaluating the quality of experience is fundamental. Case studies have shown that immersive audio technologies in XR can significantly enhance the online music-making experience. The results of these evaluations provide valuable insights for the future development of XR-NMP systems, allowing for technical and methodological adjustments based on objective and detailed data (Cairns et al., 2023; Yuan et al., 2023).

Considering that the educational model of music studies is based on face-to-face interaction and follows a traditional approach, it is crucial not to overlook the importance of communication between teachers and students (King et al., 2019). Artistic education in Spain has not been prominent in using digital technologies and resources, resulting in a lack of digital competence among educators (Morant, 2017). The educational community, often rooted in 19th-century methodologies, has faced the challenge of educational innovation (Botella Nicolás and Escorihuela Carbonell, 2018).

The use of ICT for formative assessment proves to be an important tool for improving teaching management in education. It allows for immediate and effective feedback to students to enhance their performance (Moncayo Arias et al., 2023). In this context, improving the communication of assessment results through ICT can include valuable elements for the qualitative analysis of student competence development. This perspective is made possible with the incorporation of ICT and is challenging to achieve with traditional test methods without this instrumental aid (Jornet Meliá et al., 2012).

Currently, the assessment of musical praxis in auditions, exams, job competitions, and other selection processes relies on the decisions of tribunal members. In these processes, feedback is typically not provided to the applicant, and there is no record of audio or video recording of the test for self-evaluation; only the result, whether positive or negative, is communicated. As a result, applicants often feel frustrated when the outcome is not as expected, and they cannot precisely understand the aspects that led to a negative evaluation, causing demotivation and, in some cases, even abandonment of further study (Chacón Solís, 2013).

The problem posed is how to contribute to a clearer, fairer, and more transparent evaluation and feedback process in the selection or grading processes assessing musical praxis. To address this issue, a proposal is outlined, testing the use of audio-video recordings in an entrance exam for a youth orchestra using the MAXQDA application. The main objective of this study is to analyze and evaluate the instrumental praxis of candidates for the OFUV specializing in the horn, based on audio-video recordings, providing feedback through the MAXQDA application. The study assesses the use of ICT in the evaluation process of musical praxis, proposes an assessment rubric for musical praxis, and disseminates the developed project.

2 Materials and methods

This research addresses the application of ICT tools to a real evaluation process of instrumental musical performance, assessing its implementation from different perspectives. The idea is to start from a real process, as similar as possible to what students will face during their interpretative, formative, and professional careers, to transfer this experience to the evaluation process of musical performance in Higher Music Education. The aim of the research is to evaluate the process of assessing musical praxis through audio-video recording and the MAXQDA application.

This study exhibits characteristics identifiable with an evaluative research methodology, as it assesses the implementation of ICT tools in a musical praxis evaluation process. However, it also possesses features of an action research methodology, focused on change, where the researcher actively participates in the evaluation process with the aim of providing a more objective, precise, transparent, and formative assessment of musical praxis (Bisquerra Alzina, 2009). Therefore, the methodology used can be classified as ex post facto causal-comparative and correlational: the sound event, which is already given, is not intervened upon, but the technical-interpretative variables are related to the sound results.

This proposal contemplates two types of evaluation: formative and summative. On one hand, technical-interpretative aspects are evaluated summatively, and on the other, formative feedback is provided for technical-interpretative improvement, considering procedural and attitudinal aspects. According to Canabal and Margalef (2017), to achieve solid learning in students, their way of thinking and acting must be transformed by implementing techniques and strategies that motivate students while being aware of the verification process for learning advancement. These feedback strategies and the assessment of musical performance can be synthesized into different aspects: content, procedural, and attitudinal (Riojas Bances, 2022).

In our study, we maintain the ‘Musical’ dimension (technical and interpretative aspects) and the ‘Attitudinal’ dimension, which includes feedback on stage presence and interpretative attitude (Ibáñez Gericke, 2017). According to Riojas Bances (2022), we propose an additional dimension that addresses ‘Procedural’ aspects, related in our case to physiological processes such as body posture, embouchure, and breathing. These aspects are fundamental in the instrumental practice of brass instruments, as highlighted in the specialized literature on teaching these instruments: Farkas (1962), Frederiksen (2006), Nelson (2006), and Steenstrup (2007).

Within this qualitative research paradigm, various ICT tools are used for formative assessment. These include audio and video recording of musical performances, as well as the use of the MAXQDA application for data analysis.

This study focuses on a specific case: horn entrance exams for a youth orchestra. It is complemented by three instruments necessary for the implementation of the system and the evaluation of the process: a live performance assessment rubric, an interview with the specialist evaluator, and a survey for the participants.

2.1 Initial phase

The study focused on horn auditions for the OFUV, held on October 25, 2023, at the Charles Darwin Auditorium of the Universitat de València. The selection committee included the OFUV conductor, an invited specialist, and two orchestra members. To provide a more specific evaluation, collaboration was sought from the invited specialist. The OFUV, established in 1995, aims to unify student instrumental groups and provide high-quality orchestral training. Over its 25-year history, it has trained over a thousand young musicians, many of whom are now active in orchestras or teaching roles.

The project incorporated ICT tools that handle personal data, necessitating adherence to ethical and data protection standards. Informed consent forms were created for both participants and collaborator, detailing the study’s objectives, procedures, confidentiality, and data protection, as well as participants’ rights regarding their involvement and data. The rubric proposed to assess musical praxis is based on three categories, with corresponding codes and sub-codes, as shown in Figure 1.

Figure 1. Rubric categories.
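As an illustration, the minimal Python sketch below shows one way the rubric hierarchy of Figure 1 could be represented for coding purposes. The dimension, code, and sub-code names are taken from this article; the code-level label for the attitudinal dimension is an assumption of ours, since the article lists its sub-codes without a grouping name.

```python
# Illustrative sketch only: one possible representation of the rubric's
# dimensions, codes, and sub-codes as described in this article.
# The "Stage attitude" label under the Attitudinal dimension is assumed.
RUBRIC = {
    "Technical-interpretative": {
        "Tuning": ["Octave tuning", "Interval tuning"],
        "Sound Quality": ["High register", "Medium register", "Low register"],
        "Technique": ["Emissions", "Articulations", "Coordination"],
        "Interpretation": ["Tempo", "Dynamics", "Musicality"],
    },
    "Procedural": {
        "Body Position": ["Balanced", "Unbalanced"],
        "Embouchure": ["Balanced", "Unbalanced"],
        "Breathing": ["Balanced", "Unbalanced", "Air management"],
    },
    "Attitudinal": {
        "Stage attitude": ["Positive", "Negative", "Psychology of interpretation"],
    },
}

# Quick overview: print each dimension with its codes.
for dimension, codes in RUBRIC.items():
    print(dimension, "->", ", ".join(codes))
```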

The study utilized a numerical Likert scale (1 to 6) to summatively assess technical-interpretative aspects of performers, aiming to guide their training. The scale corresponded to competency levels: Low, Basic, Medium, High, Very High, and Excellent. The evaluation focused on interpretative excellence, with ratings reflecting a “product-oriented” perspective (Chacón Solís, 2013), assessing final competency levels. For instance, a score of 6 indicated excellent competence, while a score of 1 indicated low competence. In contrast, feedback on passages adopted a “process-oriented” approach, emphasizing the improvement of technical and musical skills.
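To make the scale explicit, the following short sketch (illustrative only, not the authors’ implementation) maps the 1–6 ratings onto the competency levels described above.

```python
# The six competency levels reported for the summative 1-6 Likert scale.
COMPETENCY_LEVELS = {
    1: "Low",
    2: "Basic",
    3: "Medium",
    4: "High",
    5: "Very High",
    6: "Excellent",
}

def competency_label(score: int) -> str:
    """Return the competency level for a summative rating on the 1-6 scale."""
    if score not in COMPETENCY_LEVELS:
        raise ValueError("Ratings must be integers between 1 and 6")
    return COMPETENCY_LEVELS[score]

print(competency_label(6))  # Excellent
print(competency_label(1))  # Low
```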

2.2 Execution phase

Firstly, informed consents were distributed via email to each participant and collaborator. Data recording utilized a ZOOM Q8 digital audio-video recorder. The camera was positioned about two meters from the candidate, allowing observation from the front and left profile to facilitate the analysis of posture, breathing, and embouchure. The sample consisted of 9 participants, who were the applicants for the OFUV examination call and signed the consent form.

For data processing and analysis, the MAXQDA application, specialized in qualitative and quantitative research, was used. This application was employed to code the proposed evaluative dimensions, along with their corresponding categories and subcategories, linking segments of the audio-videos of each candidate. Each categorized segment included a feedback comment that facilitated the subsequent evaluation of the rubric. These coded audio-video segments, along with the comments, were exported and shared with each participant. Thus, when viewing their video, they could read the feedback generated for each segment in an attached document.
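The sketch below illustrates, under our own assumptions, one way a coded audio-video segment with its timestamps and feedback comment could be represented and grouped per candidate for delivery. The field names and example data are hypothetical and do not reproduce the MAXQDA export schema.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CodedSegment:
    """One coded audio-video fragment with its feedback comment.
    Field names are illustrative, not the MAXQDA export schema."""
    candidate: str
    code: str        # e.g. "Sound Quality"
    sub_code: str    # e.g. "High register"
    start_s: float   # segment start time in seconds
    end_s: float     # segment end time in seconds
    comment: str     # formative feedback attached to the segment

def feedback_by_candidate(segments):
    """Group coded segments per candidate, ordered by timestamp, so each
    applicant receives only the comments referring to their own recording."""
    grouped = defaultdict(list)
    for seg in segments:
        grouped[seg.candidate].append(seg)
    for segs in grouped.values():
        segs.sort(key=lambda s: s.start_s)
    return grouped

# Example usage with hypothetical data.
segments = [
    CodedSegment("Candidate 1", "Technique", "Emissions", 12.0, 18.5,
                 "Prepare breathing and embouchure before the attack."),
    CodedSegment("Candidate 1", "Interpretation", "Musicality", 40.0, 55.0,
                 "Keep the phrase direction towards the cadence."),
]
for candidate, segs in feedback_by_candidate(segments).items():
    print(candidate, "->", len(segs), "coded segments")
```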

The rubric used in this study was provided beforehand to the horn specialist for their assessments and comments. This rubric served as a reference for the analysis conducted through the MAXQDA application. Once the analysis and the final rubric for each candidate were prepared, they were sent to the tribunal specialist for review. Their contributions following the review were incorporated into the analysis and rubric documents. Subsequently, the collaborating specialist was interviewed to assess the degree of satisfaction with the evaluation system employed through ICT.

In Table 1, as an example, we can see the rubric for the technical-interpretative aspects that was sent to each participant, which included the specific evaluation of each subcategory and the arithmetic mean for categories and overall score. The feedback for each category is provided in the comments:

Table 1. Technical-interpretative aspects rubric example.
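For illustration, the sketch below computes category means and an overall score from hypothetical 1–6 sub-category ratings. The numbers are invented and do not reproduce any candidate’s rubric, and since the article does not specify whether the overall score averages the category means or all sub-category ratings, the sketch assumes the former.

```python
from statistics import mean

# Hypothetical scores on the 1-6 scale; invented for illustration only.
scores = {
    "Tuning":         {"Octave tuning": 3, "Interval tuning": 3},
    "Sound Quality":  {"High register": 3, "Medium register": 4, "Low register": 3},
    "Technique":      {"Emissions": 3, "Articulations": 3, "Coordination": 2},
    "Interpretation": {"Tempo": 4, "Dynamics": 3, "Musicality": 3},
}

category_means = {category: mean(subs.values()) for category, subs in scores.items()}
overall_score = mean(category_means.values())  # assumed: overall = mean of category means

for category, value in category_means.items():
    print(f"{category}: {value:.2f}")
print(f"Overall: {overall_score:.2f}")
```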

In the rubric for procedural aspects, no numerical rating was given; instead, a dichotomous choice was provided to facilitate evaluation, along with feedback comments. In a balanced posture, the line of gravity passes through the axes of all joints, with the body segments aligned vertically, maintaining physiological efficiency. A compensated embouchure exerts the appropriate muscle tension for the register and dynamics to be played. Proportionate breathing adapts to the passage being performed, managing and administering the air adequately. The other choices correspond to the opposite of each category. Attitudinal aspects were distributed through the comments attached to the videos. Table 2 shows the rubric sent to each participant.

Table 2. Procedural aspects rubric example.

The distribution of the rubric results and the analysis of each test was carried out individually via email. The email included a web link to the rubric of the test and a folder with the complete video, together with a document providing comments, timestamps, and an analysis of the different categories. Other folders contained the different aspects analyzed, with their corresponding codes. Each code contained audio-video fragments of the analyzed passages, together with a document containing feedback, timestamps, and the other codes that matched the same passage. Subsequently, surveys were designed and distributed through the Qualtrics platform to the participating horn players, who had previously received the rubric and analysis of their test. These surveys were designed to assess the level of satisfaction and conformity with the system implemented in the evaluation of musical praxis through ICT.

3 Results

This section presents the outcomes of the analysis. It encompasses the visualizations and rubric assessments of the tests, the insights contributed by the specialist collaborator, and the detailed analyses derived from these assessments. The visualizations aid in interpreting patterns in the data, the rubric provides structured evaluations of the qualitative material, and the specialist collaborator enriches the process with specialized knowledge. Together, these elements offer a comprehensive understanding of the research outcomes, with trends interpreted through qualitative coding and descriptive statistics.

3.1 From the visualization and rubric assessment of the tests

The overall results of the test were analyzed after viewing the videos and applying the rubric, categorizing them by themes, codes, and sub-codes, and providing feedback on relevant aspects for improvement or motivation.

3.1.1 Technical-interpretive aspects

Firstly, the technical-interpretive aspects were analyzed, addressing crucial parameters that directly or indirectly influence the quality of musical interpretation, either contributing to its improvement or limiting it. These parameters were categorized into the following codes: Tuning, Sound Quality, Technique, and Interpretation.

Regarding tuning, the average rating in this aspect was 3, with a maximum of 4 and a minimum of 2. Comments mainly focused on octave tuning, with twelve specific observations in this sub-code. This aspect was addressed for all participants to varying extents. Three comments were positive and encouraging, while six highlighted deficiencies in tuning practice. Six comments were recorded on interval tuning, distributed among four candidates. The feedback mostly identified passages with interval dissonance, pointing out possible causes such as lack of air at the end of a phrase and excessive tension resulting in a tight sound.

For sound quality, the average score was 3.05, with a maximum of 4.7 and a minimum of 2.3. Sound quality was divided into three sub-codes: High, Medium, and Low registers. In the high register, sixteen comments were made, three of them praising the quality of this register, while the rest criticized it as tight and forced. Feedback highlighted issues such as excessive muscle tension, obstruction of airflow in the upper respiratory passages (throat), and a lack of security and direction in this register. In the medium register, fifteen comments were received overall. Seven applicants received praise for good sound quality, while the other comments pointed out passages where the quality was suboptimal, describing it mainly as tight and forced. Feedback highlighted issues such as excessive muscle tension, lack of air, and a left-hand position obstructing the bell. In the low register, twelve comments were given to six candidates, with seven praising the good quality of this register, while the others provided feedback for improvement, suggesting taking more air, reducing muscle tension, and playing with more freedom and relaxation. The adjectives used to describe low-quality sounds were tight, aggressive, and shaky. The sound quality in pedal tones received three comments for two low horn candidates, emphasizing work on the transition between the low register and the pedal register to maintain quality in both, and avoiding an overly closed embouchure in the pedal notes.

Regarding technique, the average score was 2.95, with a maximum of 5 and a minimum of 2. The sub-codes associated with technique were: Emissions, Articulations, and Coordination. Emissions received eighteen comments concerning eight candidates. Most referred to a lack of confidence in the initial emissions, with some issues of retention in the first note. Feedback was related to breathing and embouchure preparation before the attack. Only one comment was positive and motivating. Articulations were divided into different sub-codes: legato, staccato, and swollen notes. Sixteen comments were received, highlighting a lack of definition, excessive accentuation, and a lack of body in the notes. Swollen notes were identified in two candidates. In legato, comments for three candidates mentioned the appearance of harmonics in the transitions between notes, which muddied the passages. In the coordination sub-code, twelve comments concerning eight candidates referred to a lack of coordination between articulation, air, and fingering. Most of this feedback focused on the lack of air continuity and tempo stabilization.

Finally, regarding interpretation, the average score was 3, with a maximum of 5 and a minimum of 2. The sub-codes associated with interpretation were: Tempo, Dynamics, and Musicality. Tempo received only one comment reflecting a poorly executed measure in a passage. Dynamics were further classified into flat, proportionate, and disproportionate, with nine comments in total. The flat sub-code received three comments from three candidates, evaluating their interpretation as lacking contrast in nuances and phrasing. The proportionate sub-code referred to an appropriate dynamic matching the passage’s character, motivating the candidate. Meanwhile, the disproportionate sub-code indicated an excessive nuance that did not fit the passage, in two performers. Musicality received twenty-one comments from all candidates, mostly referring to a lack of continuity, fluency, and direction in phrases. Also noted was the lack of contrasts in character, phrasing, and dynamics in some passages. We can see a summary of the technical-interpretative results in Figure 2.

Figure 2. Technical-interpretative aspects [maximum rating > minimum rating/average rating (1–6)].

3.1.2 Procedural aspects

Secondly, procedural aspects refer to the proper implementation of three crucial physical elements for the technical interpretative optimization of wind instrument performers: Body Position, Embouchure, and Breathing. In this section, sub-codes were dichotomously classified, providing necessary comments and feedback for optimizing the body towards technical-interpretative improvement.

On one hand, body position was classified as balanced and unbalanced. The results categorized four candidates with a balanced position. In three of them, excess muscle tension was observed, especially towards the high register. Two tended to lean their heads forward, and one swayed the body excessively, shifting the body’s center of gravity. Only one candidate maintained a balanced position without excess tension observed. The remaining five candidates were classified with an unbalanced position. This type of body position tends to create tension, predominantly in the neck, upper back, and lumbar region. It is noteworthy that, in general, those with a better-balanced body position and less observed tension achieved better technical-interpretative results than those with an unbalanced position where muscular tensions were observed.

On the other hand, embouchure was classified as balanced and unbalanced. Overall, the group of candidates presented an embouchure classified as balanced, but two of them received feedback aimed at facilitating air passage in the high register and pedals, trying not to overly close the opening to allow air passage in these registers.

Lastly, breathing was classified as balanced and unbalanced, although an additional sub-code was recorded in the analysis, dedicated to air management. The results showed passages with unbalanced breathing in seven candidates. The comments and feedback referred to inappropriate breathing for the passage’s character, insufficient inspired air, air retention before the attack, and excessive air pushing, creating excess tension. Regarding air management, passages with inadequate management were found in seven candidates. This affected sound quality and tuning at phrase endings, consequently affecting subsequent passages due to irregular breathing. Only two positive comments were made about a balanced and appropriate implementation of breathing.

3.1.3 Attitudinal aspects

Thirdly, attitudinal aspects were not incorporated into the rubric, as references were made only to three participants. However, they were reflected in the aspects analyzed through the illustration of identified passages with their feedback. These aspects were classified as positive and negative, adding a new sub-code related to the psychology of interpretation. Positive and negative attitudinal aspects refer to attitudes shown during the test and are considered susceptible to improvement or motivation. As for aspects related to the psychology of interpretation, lapses of concentration and insecurity in some analyzed passages were identified.

3.2 From the specialist collaborator

After reviewing the analyses and rubrics of the tests provided to the specialist collaborator, an interview was conducted to exchange opinions and assess the degree of satisfaction with the implementation of ICT in the evaluation of this selection process. It was a structured interview consisting of twelve questions: eleven closed questions with multiple-choice options and one final open question in which the collaborator could voluntarily express his opinion on the implemented process. Ten of the closed questions used a Likert scale with five ordinal options related to quality, ordered from lower to higher degrees of satisfaction or conformity; the remaining closed question used scalar data with numerical ratings from one to ten, likewise ordered from lower to higher degrees of conformity.

The interview covered several key dimensions: the perceived usefulness of feedback for candidates’ preparation and development; the ease of accessing the analyzed categories and competencies in the videos; the clarity of comments in the video analysis and rubrics; the benefit of being able to review the recorded performance; the contribution of ideas and resources for future evaluations; the additional information provided by the videos compared to direct observation; the sufficiency of the feedback provided; the system’s ability to convey transparency and objectivity; the level of agreement with the use of ICT in evaluation; interest in applying the methodology to other musical evaluation processes; opinions on the application of ICT in the evaluation process; and an overall rating of the proposal.

Additionally, the rubrics of the tests completed by the specialist collaborator were reviewed, including the comments and feedback given to the test participants. In these dialogues, the specialist collaborator expressed a high degree of satisfaction with the observations and feedback provided, also offering detailed clarifications and suggestions on some comments and passages, contributing to a more specialized observation. It is noteworthy that his review was thorough, covering all the analyzed aspects for each candidate.

The main results indicate that, from the specialist’s perspective, the implementation of ICT in the proposed rubric provides very beneficial feedback to the participants for their training and preparation for future tests. It is seen as a tool that allows for a deeper evaluation than is possible through direct observation alone. However, the system used for presenting results through folders can be improved, and although the explanations of the analyzed aspects are sufficient, some additional clarification would aid understanding. The collaborator also considered that these types of evaluations can bring more objectivity and transparency to tests, making their application in other evaluative processes of interpretative practice very interesting. However, he also mentioned, “If the participants are willing to improve and continue learning, it seems very opportune to me. But in my experience, there are people who prefer not to know”.

To conclude, he was asked to evaluate the implementation of this rubric proposal and the musical praxis evaluation process through ICT. His rating was 8 out of 10, indicating that this evaluation proposal could be interesting.

3.3 From the participant questionnaire

The questionnaire administered to the participants consisted of twelve items: eleven closed-ended multiple-choice questions and one open-ended question. The questionnaire was distributed via an anonymous link to allow participants to provide their assessment freely. Links were provided individually to each participant via email and the WhatsApp application until all completed surveys were received.

In the satisfaction survey on the implementation of ICT in the evaluation process of musical praxis, several key dimensions were assessed through a series of questions. Participants were asked about the usefulness of the provided feedback, the ease of finding the different categories analyzed in the video, and the clarity of the comments received in the video analysis and rubric. Additionally, the survey inquired whether it was beneficial to review the performance afterward, if the evaluation system had offered useful ideas or resources for future assessments, and the level of satisfaction with the comments and analyzed categories. It also asked whether the feedback received was sufficient or insufficient and if the implemented system conveyed greater objectivity and transparency. Overall satisfaction with the musical praxis evaluation system was evaluated, and participants were asked about the potential interest in applying this proposal to other evaluative processes in musical praxis. They were also invited to express their opinion on the implemented evaluation system and to rate the proposal for evaluating musical praxis through ICT.

The analysis of the data recorded in the questionnaires given to the study participants indicates that most performers, 89%, consider the feedback to be very useful, while a small minority, 11%, found it slightly useful (see Figure 3).

Figure 3. The usefulness of feedback.
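For context, with nine respondents each participant accounts for roughly 11% of the sample. The short sketch below shows the arithmetic behind the reported percentages; the response counts are inferred from the published figures rather than taken from raw data.

```python
# Arithmetic check for the feedback-usefulness item: counts assumed from the
# reported percentages (8/9 ≈ 89 %, 1/9 ≈ 11 %), not from raw survey data.
responses = {"Very useful": 8, "Slightly useful": 1}
total = sum(responses.values())  # 9 participants

for label, count in responses.items():
    print(f"{label}: {count}/{total} = {count / total:.0%}")
# Very useful: 8/9 = 89%
# Slightly useful: 1/9 = 11%
```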

Regarding the comments received, the majority, 56%, found them extremely easy to understand, 33% somewhat easy, and 11% neither easy nor difficult to understand. Similarly, most candidates, 67%, believed that the feedback received was sufficient, 11% somewhat sufficient, and 22% thought it was neither sufficient nor insufficient. On the other hand, the majority agreed with the comments received, with 44% fully agreeing and 33% somewhat agreeing, while a minority remained neutral (11%) or completely disagreed (another 11%). This degree of agreement with the comments received aligns with the collaborator’s observation that not every candidate is willing to receive feedback, even when it is constructive. Data on the organization of the test analysis results show that, although the majority found the folder organization extremely easy (44%) or somewhat easy (33%) to navigate, 11% found it neither easy nor difficult, and another 11% found it extremely difficult to locate the material. These data prompt reflection on how the analyzed data are organized and encourage improvements in their management. Regarding whether viewing and listening to the test afterward was useful, participants unanimously agreed that it was extremely useful (56%) or very useful (44%).

Similarly, when asked if this evaluation system could help them in preparing for future tests, the results, although with a relative majority, showed that 55% definitely believed so, 22% probably yes, and another 22% remained neutral. These data suggest that this evaluation system may offer more support for preparation than established procedures in which feedback is typically not provided.

Concerning whether this evaluation system can convey more objectivity and transparency, a large majority of participants believed that it definitely could (67%) and probably could (22%), with a small minority (11%) not entirely sure (see Figure 4).

Figure 4. Objectivity provided by the evaluation system.

Most participants believed it would be extremely interesting (44%) or very interesting (33%) to apply this proposal to other evaluative processes (exams, competitions, etc.), while a minority remained neutral (11%) and another disagreed entirely (11%) (see Figure 5).

Figure 5. Evaluation of the application of the system to other evaluative processes.

In the open question, four candidates provided their opinions on the implemented evaluation system. Their comments referred to the originality and innovation of the analysis, expressing satisfaction with the evaluation and the provided assistance. They also mentioned the opportunity this evaluation process offered to self-observe in an audition, enabling self-assessment to identify possible errors and correct them in future tests.

The last question recorded scalar data from 0 to 10 to assess this proposal for the evaluation of musical praxis through ICT. The results showed that 56% of the participants rated this proposal with a 10, 33% with an 8, and 11% rated it with a 5.

Through these results, it can be deduced that the participants predominantly value this proposal highly, with a minority (11%) not appreciating it as much. As the specialist collaborator mentioned, not all participants in musical performance evaluation processes are willing to receive feedback and learn from it.

4 Discussion

In conclusion, the study comprehensively addresses the evaluation of instrumental musical praxis, highlighting the inherent complexity in assessing musical interpretation due to its ephemeral and subjective nature. It emphasizes the need to implement evaluation rubrics that consider various factors and parameters to achieve fair and objective evaluations, providing useful feedback to both students and teachers.

The study presents a methodology that leverages audio and video recordings to assess instrumental performance, using the MAXQDA tool for detailed analysis and categorization of results. It evaluates technical-interpretive, procedural, and attitudinal aspects of horn candidates for the OFUV, offering a comprehensive overview of their performance. The methodology is designed to support objective evaluations in examinations, although it requires ongoing assessment and specialist validation to ensure continuous improvement.

Collaboration with a specialist reinforces the utility of the proposal, highlighting the beneficial feedback for participants and the possibility of obtaining a deeper evaluation than direct observation. The questionnaire to participants reveals that the majority perceives the feedback as useful and believes that this evaluation approach could improve preparation for future auditions. Although some participants show resistance to receiving feedback, most value the implementation of this methodology positively.

In the end, the study offers a comprehensive view of the evaluation of musical praxis, proposing an innovative methodology. However, areas for improvement in presenting results and organizing information are highlighted, suggesting possible adjustments in the practical implementation of the proposal.

Finally, this study suggests the need to expand teacher training in ICT tools that facilitate the processing of audio-video data for subsequent evaluation and feedback on musical praxis. Currently, there are applications and software that offer these features, such as MAXQDA, Atlas.ti, and ELAN. Public institutions should promote their use by facilitating training and access for teachers of artistic education. However, their use should first be regulated, taking into account the ethical considerations and data protection issues that recording and processing data in this format may entail.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.

Ethics statement

The studies involving humans were approved by ethics committee of Universitat Jaume I (https://www.uji.es/investigacio/base/etica/comites/)—the research is part of a doctoral study of the university. In addition, written informed consent was obtained from the participants, and can be consulted in https://drive.google.com/drive/u/3/folders/1LRMJ15bh3IANogaDk_aU2y3SNxY61qUK. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation in this study was provided by the participants' legal guardians/next of kin. Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.

Author contributions

GE: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Visualization, Writing – original draft, Writing – review & editing. JC: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Writing – original draft. AB: Resources, Supervision, Validation, Visualization, Writing – review & editing. AP: Supervision, Validation, Visualization, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was funded by Conselleria d’Educació, Universitats i Ocupació, grant number resolució de 29 de novembre de 2023, de la directora de l’Institut Superior d’Ensenyances Artístiques de la Comunitat Valenciana (ISEACV), de la convocatòria d’assignacions econòmiques per a la realització de projectes d’investigació acadèmica en els centres superiors d’ensenyaments artístics de la Comunitat Valenciana, per a desenvolupar durant el curs acadèmic 2023–2024 in the project Proposta de rúbrica per avaluar la praxis musical a través de les TIC (Conservatori Superior de Música de Castelló).

Acknowledgments

We would like to express our sincere gratitude to the Aula de Música of Universitat de València and the Área d’Activitats Musicals of Fundació General Universitat de València for their invaluable support. In this research, the iMUSED research group (Investigating Music Education GIUV2020-483) of the Universitat de València participated. We thank the Universitat Jaume I of Castelló for the licenses of the software used.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Almeida, A., Li, W., Schubert, E., Smith, J., and Wolfe, J. (2023). Recording and analysing physical control variables used in clarinet playing: a musical instrument performance capture and analysis toolbox (MIPCAT). Front. Signal Proc. 3:1089366. doi: 10.3389/frsip.2023.1089366

Biasutti, M., Philippe, R. A., and Schiavio, A. (2023). E-learning during the COVID-19 lockdown: an interview study with primary school music teachers in Italy. Int. J. Music. Educ. 41, 256–270. doi: 10.1177/02557614221107190

Bisquerra Alzina, R. (2009). Metodología de La Investigación Educativa. Madrid: La Muralla.

Cairns, P., Hunt, A., Johnston, D., Cooper, J., Lee, B., Daffern, H., et al. (2023). Evaluation of Metaverse music performance with BBC Maida Vale recording studios. AES 71, 313–325. doi: 10.17743/jaes.2022.0086

Canabal, C., and Margalef, L. (2017). The feedback: a key to learning-oriented assessment. Profesorado, Revista De Currículum Y Formación Del Profesorado 21, 149–170.

Capistrán Gracia, R. W. (2023). Las rúbricas como instrumento de evaluación del desempeño musical en una universidad de México. Praxis Investigativa Redie 15, 44–60.

Carabal-Montagud, M. Á., Escorihuela-Carbonell, G., Santamarina-Campos, V., and Pérez-Catalá, J. (2021). “The impact of the Covid-19 pandemic on the music societies of Valencian community, Spain” in Music as intangible cultural heritage. Economic, cultural and social identity. eds. B. De-Miguel-Molina, V. Santamarina-Campos, M. De-Miguel-Molina, and R. Boix-Doménech (New York City: Springer International Publishing), 119–137.

Carreira, F., Appugliese, G. A., Barboza, R. B., Santiago, I. C., and Monzoni, M. (2023). Agora Tudo On-Line: Análise Da Migração de Uma Disciplina Baseada Na Aprendizagem Transformadora. Revista Brasileira de Casos de Ensino Em Administração 13, 1–13. doi: 10.12660/gvcasosv13nespeciala2

Chacón Solís, L. A. (2013). ¿Qué significa “evaluar” en música? Revista Electrónica Complutense de Investigación en Educación Musical - RECIEM. 9, 1–5. doi: 10.5209/rev_RECI.2012.v9.42805

Costa, R. L. (2006). The logic of practices in Pierre Bourdieu. Curr. Sociol. 54, 873–895. doi: 10.1177/0011392106068456

Cotolí, E., and Escorihuela, G. (2024). “Aproximación a La Evaluación de La Praxis Musical a Través de Las TIC” in Educacion y Formacion Musical Transformacion Social, Empleabilidad y ODS. ed. A. M. V. Carrasco (Madrid: Dykinson S L), 71–84.

Farkas, P. (1962). The art of Brass playing. New York: Wind Music, Inc.

Frederiksen, B. (2006) in Arnold Jacobs: song and wind. ed. J. Taylor . 6th ed. (Gurnee: Wind Song Press Limited).

Freire, P., and Ronzoni, L. (2004). La Educación Como Práctica de La Libertad. Buenos Aires: Siglo Veintiuno Editores Argentina.

Ibáñez Gericke, T. (2017). Aprendizaje, Experiencias Previas y Criterios de Evaluación En La Formación Musical Superior. Rev. Music. Chil. 71, 79–107. Available at: http://www.uchile.cl/portal/

Jordán Enamorado, J., and Ricart Vayá, A. (2022). Adaptarse al nuevo contexto educativo: hacia un modelo teórico-práctico para la docencia online. Revista Complutense de Educación 33, 635–643. doi: 10.5209/rced.76384

Jornet Meliá, J. M., González-Such, J., and García-Bellido, M. R. (2012). La investigación evaluativa y las tecnologías de la información y la comunicación (TIC). Revista Española de Pedagogía 70, 93–110.

King, A., Prior, H., and Waddington-Jones, C. (2019). Exploring teachers’ and pupils’ behaviour in online and face-to-face instrumental lessons. Music. Educ. Res. 21, 197–209. doi: 10.1080/14613808.2019.1585791

Moncayo Arias, M. A., Bastidas Vera, E. A., Cabezas Macias, P. M., Ledesma Espín, C. del R., et al. (2023). Aplicación de TICs en la evaluación formativa mejora la gestión docente en educación básica. J. Sci. Res. 8, 1–16.

Morant, R. (2017). “Cambiar el paso: las escuelas de música de las sociedades musicales en el s. XXI” in La mecánica de la creación sonora. eds. A. Murillo and M. Díaz (València: Institut de Creativitat i Innovacions Educatives de la Universitat de València), 167–181.

Nelson, B. (2006). Also Sprach Arnold Jacobs. Baarn: Polymia Press.

Botella Nicolás, A. M., and Escorihuela Carbonell, G. (2018). Educación en artes: Un enfoque sobre la enseñanza performativa de la música. 1, 78–93.

Nugroho, M., and Biasutti, M. (2024). Instructors’ perspectives on teaching music online after the covid-19 lockdown: a qualitative analysis in Indonesia. J. Univ. Teach. Learn. Pract. 21, 1–26. doi: 10.53761/3dqqzp79

Riojas Bances, J. L. (2022). Retroalimentación evaluativa en la interpretación musical. 14, 66–73. doi: 10.26495/tzh.v14i2.2285

Sinnott, E. (Ed.) (2007). Ética Nicomaquea. Buenos Aires: Ediciones Colihue SRL.

Steenstrup, K. (2007). Teaching Brass. Aarhus: Det Jyske Musikkonservatorium.

Yuan, R., Ma, Y., Li, Y., Zhang, G., Chen, X., Yin, H., et al. (2023). “MARBLE: music audio representation benchmark for universal evaluation” in Advances in Neural Information Processing Systems. eds. A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Curran Associates, Inc.), 36, 39626–39647. Available at: https://proceedings.neurips.cc/paper_files/paper/2023/file/7cbeec46f979618beafb4f46d8f39f36-Paper-Datasets_and_Benchmarks.pdf

Žalys, V. (2023). “Computer-assisted qualitative data analysis in assessment of music training process (CAQDA)” in Arts and research in education: opening perspectives. eds. J. O. Segarra and F. H. Hernández (University of Girona), 71–79. Available at: https://eera-ecer.de/networks/29-research-on-arts-education/

Keywords: musical praxis assessment, education competencies, music evaluation, ICT tools, performative education

Citation: Escorihuela G, Cotolí JE, Botella AM and Porta A (2024) Exploring the assessment of musical praxis through ICT in the academic context. Front. Educ. 9:1438721. doi: 10.3389/feduc.2024.1438721

Received: 26 May 2024; Accepted: 20 August 2024;
Published: 30 August 2024.

Edited by:

Oscar Casanova, University of Zaragoza, Spain

Reviewed by:

Riyan Hidayatullah, Lampung University, Indonesia
Qiran Wang, Zhejiang International Studies University, China
Rong Yang, Sibelius Academy, Finland, in collaboration with reviewer QW

Copyright © 2024 Escorihuela, Cotolí, Botella and Porta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Guillem Escorihuela, guillem.escorihuela@uv.es
