
ORIGINAL RESEARCH article

Front. Educ., 08 January 2025
Sec. Assessment, Testing and Applied Measurement
This article is part of the Research Topic Trends in the digitization of education: approaches, innovations and scenarios

Innovative assessment based on multimedia proofs and social network sharing for introductory engineering courses

Antonio Correcher, Carlos Ricolfe-Viala*, Eugenio Ivorra, Fernando Cordero, Carlos Blanes and Jaime Martínez-Turégano
  • Department of Systems Engineering and Control, Universitat Politècnica de València, Valencia, Spain

Training and education quality are crucial worldwide, even more so in technical and vocational areas. Assessment validates the performance of the training and education process; however, the design of assessments that validate the acquisition of certain knowledge, abilities or competences is still unsolved and remains open for research. In this paper, advances in new technologies take the assessment process a step further. Since social media is a big part of daily life for many students, an assessment method is proposed that uses social media as the learning motivation. In consequence, the proposed assessment method is completely aligned with students' way of life, and this fact motivates the learning process, since generating new high-impact social media content is a very challenging task. The paper shows the results of an assessment innovation project in which the authors develop a methodology for evaluating laboratory practices that measures knowledge of introductory concepts in automatic control engineering courses. The innovation aims to actively involve students in the generation of social media resources through modern technologies that are attractive to them. Under this methodology, students are tasked with creating a series of short videos explaining how to solve a specific problem or elucidate a concept covered in theory lessons. These videos are shared with their peers via a social network. Lecturers evaluate the videos considering the quality of the explanation of technical concepts and their impact on the social network. Results show that the proposed assessment methodology increases student motivation compared to the traditional assessment process and improves marks.

1 Introduction

To guarantee the competences acquired by graduates, a reliable assessment process is essential. The assessment process is an indicator of the success of the learning outcomes, and its success depends on several issues. First, the learning outcomes should be defined accurately as the knowledge, abilities or competences to be acquired in relation to a subject. Second, the assessment should be adapted to the learning outcomes using existing instruments and methods in order to offer a reliable evaluation. Moreover, students should know the benefits of an assessment performed with a predefined method, and the assessment should be aligned with the tendencies and directions established as references for the learning process in an educational center. It is necessary to evaluate behavior as well as knowledge, considering that learning outcomes are defined as knowledge, abilities or competences (Lile and Bran, 2014).

Factors such as personality, intelligence, social context and learning load are strongly and steadily connected with academic performance, whereas variables such as sex, nationality and age do not influence the learning outcomes significantly (de Koning et al., 2012). Information literacy competence is considered necessary both for reaching learning outcomes and for personal life and work, according to Saunders (2012). Unfortunately, recent studies show that students lack the capacity to find, access, evaluate and use information in an efficient and ethical way (Lile and Bran, 2014).

Analyzing methods of assessing learning outcomes, Douglass et al. (2012) differentiate between those who support the use of surveys based on self-reported measures and those who prefer more objective measures such as test-based measurement. Not all assessment methods are able to measure all learning outcomes, and the reliability of the measure changes depending on the chosen method. Evaluating learning outcomes requires the development and implementation of reliable and valid performance assessments, which is both time-consuming and challenging (Ruiz-Primo and Shavelson, 1996). For example, to demonstrate the level of competences, a multiple-choice questionnaire is not the best choice compared with a test that measures discipline-specific skills (Zlatkin-Troitschanskaia et al., 2015). Close-ended responses limit measurement to narrow competences, such as recognizing discrete facts and deploying individual process skills to answer questions, which may differ from the true competences being evaluated (Garden, 1999; National Research Council, 2001). Tests with open-ended questions allow the measurement of higher-order skills such as analytic reasoning and evaluation, problem-solving and written communication (Wolf et al., 2015), and can be adapted to evaluate both generic and discipline-specific skills (Tremblay, 2013). Using rubrics standardizes the student assessment criteria and facilitates learning (Timmerman et al., 2010). The most important element of the teaching and learning process is to give students feedback on the quality of their work through assessments (Sato et al., 2008). Regarding the assessment format, traditional oral or written exams or the evaluation of performance in class are clearly at a disadvantage compared with alternative evaluation methods based on modern technologies. Information and communication technologies allow both teaching and evaluating learning outcomes in a different manner. Considering all these premises, assessment permeates the learning process and notably influences students' careers.

Technical education demands that educational centers strengthen practical skills training through competency-based exercises (Mtshali et al., 2021). In consequence, technical education carries extra costs, such as building new workshops or renovating existing ones, equipment maintenance, and the cost of new equipment and consumable instructional materials. The aim is to increase the technical competitiveness of society in a very competitive world (Ubong and Oguzor, 2007). On the one hand, lecturers should incorporate practical skills and the application of scientific principles into their lessons, to instill in students practical skills competences that will be useful in their future work. On the other hand, practical skills learning depends on the design of the practical workshops, considering the method, when they take place, the assessment planning and the assessment implementation (Lumadi, 2013). To improve the skills learning outcomes, it is necessary to dedicate more time to teaching and to evaluate how students handle and use the equipment to acquire skills in practical workshops (Eisenkraft, 2013).

Assessment is assumed to be the valid tool to validate the competence of practical skills. However, how assessment validates the competence of a practical skill is still unsolved and remains open for research (Richardson and Blair, 2015). Although assessment is an important part of the learning process of practical skills, discussions and efforts are devoted to improving the infrastructure of workshops, and little attention is given to strengthening the assessment of practical skills in technical topics. It is very important to innovate with new techniques that guarantee a reliable assessment of the learning outcomes of practical skills, considering that offering technical education is much more expensive than offering non-technical education.

Learning media are crucial for the effectiveness and efficiency of reaching the learning goals. Lubis et al. (2023) divide learning media into visual media, audio media, audio-visual media, and multimedia. According to this classification, multimedia involves the students' senses the most because verbal and pictorial information is presented simultaneously (Richter et al., 2016; Mayer et al., 1996). Using interactive media in learning activities with text, pictures, video, audio, and animation boosts students' motivation because of their interest in multimedia systems. With interactive multimedia, students are attracted to learning because of its engaging display and its support for the learning activity. According to Kassim et al. (2014), multimedia plays a role in helping students generate flexible and original ideas. Interactive multimedia also makes the learning atmosphere more relaxed and free of pressure. New technologies, modern tools and equipment, and online resources facilitate the learning process and make it more effective and flexible (Raja and Nagasubramani, 2018; Vinales, 2015). They have considerably changed the traditional relations between teachers and students, and the way students learn and teachers teach (Manzoor, 2016).

Many studies, especially since the 1990s, have presented how multimedia helps in the learning process (Mayer, 2017; Wang et al., 2017; Weng et al., 2018; Li et al., 2019). In contrast, studies that use multimedia as an assessment method focus on how to take advantage of computer-based assessments for capturing learning outcomes; however, relatively little is understood about how to leverage this potential (Kuo et al., 2015). Some computer-based assessments rely on multiple-choice and short-answer questions that are common in paper-based testing (Quellmalz and Pellegrino, 2009). Other studies use the capacities of computer technology to assess learning outcomes employing multimedia resources that allow active interaction (e.g., Bennett et al., 2010; Gobert et al., 2013; Quellmalz et al., 2012). It is true that multimedia assessments allow the creation of immersive environments, similar to real life, that can provide more meaningful information about student skills (Clarke-Midura and Dede, 2010; Ketelhut et al., 2007); in the scientific literature, multimedia and even immersive virtual reality possibilities are being discussed in relation to assessment (Susi et al., 2007; Sliney and Murphy, 2011; Clarke-Midura and Dede, 2010). Technological progress allows the design and development of innovative and interactive technology-based assessments, but it must always be in accordance with psychometrics.

In this paper, advances in new technologies take the assessment process a step further. Taking advantage of the impact that new technologies, in the form of widely adopted social media networks, currently have on our lives, an assessment method is proposed that uses social media as the learning motivation. Social media comprises all internet sites or apps that allow users to share content and to respond to other content, including text, pictures, videos and reactions. Fortunately or unfortunately, social media is a big part of daily life for many students (Popat and Tarrant, 2023; Valkenburg et al., 2022; Coyne et al., 2020). In consequence, the proposed assessment method is completely aligned with students' way of life, and this fact motivates the learning process, since generating new high-impact content is a very challenging task.

The paper continues as follows: Section 2 describes the starting point that motivates this work, the university's resources and the project development process; Section 3 describes how the data evidence is collected; Section 4 shows the project results; and Section 5 discusses the main findings.

2 Materials and methods

2.1 Situation of the problem

The subject Basics of Automatic Control (BAC) is a core subject taught in semester B of the second year of the Degree in Electronic and Control Engineering at Universitat Politècnica de València (UPV), Spain. It has seven laboratory practice groups (with seven lecturers), integrated into three theory groups (with another three lecturers), and enrolls between 170 and 175 students each year. The course requires students to progressively assimilate concepts in order to understand, in the abstract, the design of automatic controllers for any type of system, together with all the practical implications that this design entails. This process requires students to work constantly on the concepts of the subject and integrate them with those already learned.

The practical part of the course works with both simulation software and real systems, and students put the concepts acquired in the theory lessons into practice. Laboratory practice sessions take place every week, and a script is provided for each practice in which students solve a series of exercises. Concepts acquired in previous sessions are not included in the current script, since it is assumed that students know them. This represents a difficulty, since students have to remember previous work, and this is not always successful.

At the end of each laboratory practice session, each student solves a test-type exam on a multimedia platform related to the session's work. This assessment method presents a deficiency: any small error, such as a minor mistake in the simulation software or a slight deviation from the provided script, leads the platform to give very low marks, even if the student has understood and worked on the session. This has two consequences. First, it provokes student frustration, since the mark does not correspond to the student's effort; second, the teachers must correct the final marks for the laboratory practice to adapt them to the level shown by the students. This mark correction is done at the end of the course, losing the continuous feedback that the students should have.

A verbal survey of students in the 2021/2022 academic year indicated that most students considered the assessment method demotivating for learning the subject. Moreover, more than 65% of the marks had to be increased by more than 10% to reflect the level of the students, in line with the results of previous years. In the same survey, most students reported finding it challenging to remember what they had learned in the previous sessions. In consequence, the development of the practical sessions becomes more complicated as the subject evolves, and the learning process degrades.

At this point, two difficulties are detected in the practical sessions of BAC: students cannot easily remember previous concepts, and marks do not correspond to the students' effort in the practical session. These two problems are intended to be solved by introducing a novel assessment method that motivates students and improves the learning outcomes. The proposed approach is to generate multimedia content to be shared through a social network, such as a dedicated course group on a popular social media platform, where students can share their practical work and learn from each other's experiences. Fortunately, social media motivates students to create content, and three consequences can be expected. First, since the student is motivated to create high-impact content for the social network, the concepts are worked on thoroughly to create a technically interesting video. Second, revising previous work in future practical sessions is easy: students simply watch a short video from a previous session. Third, marks correspond to the student's effort, since the lecturer can evaluate the results just by watching a video. A fourth consequence can be supposed, although it is not evident: student-generated resources can be reused as learning resources for future courses. This approach encourages students to create and share their learning materials, such as video tutorials, practice exercises, or even simulation models. As seen in Arruabarrena et al. (2019), this topic is not self-evident and requires in-depth study, although the benefits for both the students who generate the content and those who use it make it worthwhile to work on this concept.

2.2 University portal and traditional assessment

The UPV provides some valuable tools for assessment and communication with students. The UPV portal, called PoliformaT, is based on the open-source Sakai Learning Management System. It includes all the information regarding teaching materials, timetables, assessments and exams, and allows the creation of tasks and evaluation exercises. In particular, two features have been used in this project. On the one hand, the classical assessment process is carried out with the exam tool. Teachers can create exercises with numerical responses (among other types), so students have to fill in a gap with the correct response. The exam tool includes a tolerance range for the response and assesses the exercise automatically, so teachers do not need to invest time in correction tasks.
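As an illustration of this automatic correction, the following minimal Python sketch reproduces the idea of tolerance-based grading of a numerical answer. The function name, signature and values are hypothetical, chosen for illustration; this is not the actual Sakai/PoliformaT API.

```python
# Hypothetical sketch of tolerance-based auto-grading of a numerical answer,
# mimicking what the PoliformaT exam tool does internally; this is not the
# real Sakai API.

def grade_numeric_answer(student_value: float,
                         correct_value: float,
                         tolerance: float,
                         points: float = 5.0) -> float:
    """Award full points if the response lies within the tolerance range,
    and zero otherwise (an all-or-nothing scheme, as described in the text)."""
    return points if abs(student_value - correct_value) <= tolerance else 0.0

# Example: a settling time of 2.43 s is accepted when the expected value
# is 2.40 s with a +/-0.05 s tolerance.
print(grade_numeric_answer(2.43, 2.40, 0.05))  # -> 5.0
```

This all-or-nothing behavior is precisely what penalizes students who follow a correct procedure but make a small slip in the final value.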

On the other hand, the task tool allows teachers to create interactive tasks with the students. Teachers define the task, the scheduling, and the format of the files to be uploaded. Students upload the files when they are done, and the teacher returns their marks and comments.

The traditional evaluation of laboratory practices consists of a two-question test developed with the exam tool. Each question is a problem related to the content of the activity in the practice class. The final mark can be 0, 5, or 10 (on a 0 to 10 grading scale), and the process followed to reach the answer cannot be assessed. As stated before, students feel discouraged when the mark does not reflect the knowledge they acquired in the laboratory practical session.

The new methodology uses the task tool in PoliformaT. The task assigned to each student pair consists of creating a video explaining how to solve a problem related to the laboratory practice. Therefore, not only the final value is essential but also the entire process, including the conceptual aspects that apply to the specific problem. To gain more student involvement, the best videos are shared on social networks, and the post's success is included in the final mark.

This new approach takes into account two distinct types of knowledge, procedural and declarative, which are important in understanding how individuals acquire and apply information in the context of cognitive science. On the one hand, declarative knowledge refers to concepts that can be expressed verbally, encompassing semantic and episodic knowledge. With the proposed assessment method, students share the experience of the laboratory practice as an episode of their lives together with the scientific principles that are learned; moreover, vocabulary and general information are taken into account. On the other hand, procedural knowledge involves the skills and processes used to perform tasks. Procedural knowledge is often acquired through practice and is typically less verbal than declarative knowledge; it includes the ability to execute activities and solve problems. In the case of the proposed assessment method, students demonstrate the acquired skills via the video presentation. This approach not only assesses content knowledge but also enhances collaborative skills through the time spent making videos.

2.3 Project development process

The working team, in collaboration with the students, deliberated at the beginning of each academic year to determine the social network where the videos would be shared. In the first year, TikTok was chosen due to its proximity to the students, and an account was created to share the videos (@automatica_basica_upv). However, after the first year, it was concluded from student feedback that TikTok was not suitable for this type of content. In the second year, the working team decided to use YouTube, and an account was created to share the videos (@aub_disa). As for which practices were to be assessed, four of the eight practices in the course were selected to test the new assessment method. For each one, problems were developed for the students to solve, and the results were captured in videos.

In parallel, approval of the project by the UPV ethics committee was requested, so that if this requirement were necessary for a subsequent publication, it would already have been fulfilled. This validation included the documents required from the students in order to participate in the project and to inform them of their rights when participating: informed consent, commitment to confidentiality, undertaking not to use the data for other purposes, and transfer of image rights.

With all the documentation prepared, in the first year students participated voluntarily in the project according to the documents generated in the validation of the UPV ethics committee; in the second year, drawing on the experience accumulated in the first year, the evaluation was applied in full to all the groups taught by the teachers involved in the subject. Thus, in the first year, the sample was a subset of 30 students from two practice groups and was compared with the rest of the students in the subject. As anticipated in the weakness analysis, despite efforts to encourage participation, most of the students who were repeating the subject were reluctant to engage in the project. With a tried and tested methodology, in the second year video assessment was applied in two of the seven groups of the subject (50 students), not as a voluntary activity but as a general assessment.

The videos were assessed and then uploaded to the social network if there were no conceptual errors. One part of the mark was reserved for the video's success on the social network, and the other for the correct resolution of the problem set. After a few weeks, the video's success was checked (through views and "likes") and the final mark assigned.
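A minimal sketch of how such a two-part mark could be computed is shown below. The 80/20 weighting and the view/like thresholds are assumptions for illustration, since the paper does not specify the exact split.

```python
# Hypothetical sketch of a two-part video mark: one part for the correct
# resolution of the problem (rubric score) and one part for the video's
# success on the social network. The 80/20 split and the view/like
# thresholds are assumptions, not values reported in the paper.

def video_mark(rubric_score: float,  # 0-10, from the evaluation rubric
               views: int,
               likes: int,
               min_views: int = 100,  # assumed thresholds for the social bonus
               min_likes: int = 20) -> float:
    social_score = 10.0 if (views >= min_views and likes >= min_likes) else 0.0
    return 0.8 * rubric_score + 0.2 * social_score

print(video_mark(rubric_score=9.0, views=250, likes=35))  # -> 9.2
```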

A series of student surveys was carried out during each academic year to determine the proposal's effectiveness. Three surveys were carried out in the first year of the project and two in the second. It is important to highlight that one of the weaknesses was collecting evidence; therefore, in the second year, the number of surveys was reduced to avoid survey fatigue among the student body. These surveys sought feedback on the students' perception of the different assessment methodologies (traditional vs. video) and the time spent on them.

Thanks to this feedback, the number of videos required, their format, and their content were modified until the new methodology received high evaluations and reasonable working times for students were achieved. Additionally, Excel sheets were created to facilitate the quick evaluation of videos by teachers. These sheets have been refined and improved over the years of the project’s implementation.

The main conclusion of the project is that students value the video assessment much more highly than the traditional PoliformaT test assessment. Despite the time spent on the activity being a drawback, students have responded positively to the new method. To address this, PowerPoint templates have been developed to generate the videos with the minimum desired content. These templates have been very positively received by the students and will be the vehicle for their final implementation in the course, instilling optimism for the future of the project.

It is foreseen that, during the academic year 2024/2025, video assessment will be implemented in all the laboratory practices of the subject.

3 Data collection

Students were surveyed regarding their perception of the new assessment methodology as opposed to the traditional one. The questions covered three aspects: the time spent on each assessment and their perception of it, their motivation for each type of assessment, and the perceived fairness. All the questions are shown in Table 1. For the non-numerical questions, a 4-point Likert scale was used:

• Strongly disagree - 1 point.

• Disagree - 2 points.

• Agree - 3 points.

• Strongly agree - 4 points.


Table 1. Questions in the student survey.

All survey data were processed statistically by extracting the mean, standard deviation, and median to study these parameters.
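For illustration, a minimal Python sketch of this post-processing is shown below; the question labels and the Likert responses are invented placeholders, not the project's actual data.

```python
# Minimal sketch of the survey post-processing described above: mean,
# standard deviation and median per question. Column names and responses
# are invented placeholders, not the project's actual data.
import pandas as pd

surveys = pd.DataFrame({
    "Q3_video_motivating": [4, 3, 4, 2, 4, 4, 3],  # Likert 1-4 responses
    "Q6_test_motivating":  [1, 2, 1, 3, 1, 2, 1],
})

summary = surveys.agg(["mean", "std", "median"]).round(2)
print(summary)
```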

In addition to the surveys, the marks of students participating in this project were compared with those of the rest of the students taking the same subject, to study whether the difference in the assessment process impacted the students' marks.

4 Results

To check the effectiveness of the proposal, data from the surveys were analyzed. Moreover, indicators from the social networks used were collected to study interactivity, and the teachers' perception was also included in the study.

Table 2 shows the results of all the surveys completed by the students. The first item to be analyzed is the time spent by the students to fulfill the video creation task. In the first year of the project (academic year 2022/2023), two or three videos were requested depending on the selected practice, so it is interesting to know the data per video as well as the overall time spent on the assessment. In the academic year 2023/2024, only one video per practice was requested. The traditional evaluation always lasts the same: 30 min.


Table 2. Survey results.

The survey data show that in the first year of the project (2022/2023), when students had much flexibility to generate the videos, the mean and median time spent generating the videos were much higher than for the traditional assessment (30 min). In interviews, students stated that this overall time was too long compared to the traditional assessment, although the time per video was adequate. The high variability in the standard deviations, between 40 and 60% of the mean, adds a layer of complexity to the assessment process: making the videos was much easier for some students than for others. This parameter needs to be reduced to ensure a comparable effort for each student.

In the second year (2023/2024), only one video per session was requested, although more content had to be included in each video than in the previous year. While the time spent making the video is still higher than in the traditional assessment, the overall time effort of the students is less than in the previous year. Reducing time and effort could lead to more efficient learning and assessment processes. The standard deviations move in the same ranges as in the previous year; however, the absolute time is lower, and this significant variability is a value to be reduced in subsequent years.

Another fact to be studied is the students' perception of the time they spent making the videos. Students positively evaluated the time devoted to video assessment, although the average evaluation drops somewhat as the subject progresses. It is understood that students have less and less time to carry out any task due to the work and assignments coming from other subjects, so they begin to value somewhat less the extra time they have to dedicate to generating videos. Students seem satisfied with the time spent on traditional assessment, as it is only 30 min per practice session and is integrated into the practice time.

Regarding the grading perception, students agreed with their grades in the video assessment, and this perception is also reflected in the students' comments. As diagnosed before the project started, students rate the marks obtained with the traditional assessment less highly, and this rating goes down as the subject progresses, as they see that their effort is not reflected in the mark. It is also interesting to note how the rating of their mark levels out in the last survey of the second year. This may be because students have achieved a higher level in the subject and are aware of the assessment they should receive, regardless of the methodology.

Regarding motivation, the survey reveals initially high levels of student motivation. In the first survey of each year, the median motivation rating is at its peak. While this rating changes as the course progresses and students face increasing workloads, it remains notably high compared to the traditional assessment. The results on student motivation for the traditional assessment are poor: on average, students disagree that the traditional assessment is motivating.

From the point of view of grades, in the first year there were no appreciable differences between groups with different methodologies. Students who participated in the project obtained an average laboratory practice grade of 7.6 out of 10, while those who did not participate finished with an average of 7.5. While the marks in the video-assessed practices were higher in the project groups, their marks dropped when they were assessed by the traditional tests. In contrast, the students who did not participate in the project adapted better to the test-based assessment and matched the averages. In the second year, we worked on the design of the tests so that the problem to be solved would be similar to the one assessed through the videos; in this way, the change to traditional assessment was manageable. This adjustment improved the average marks for the laboratory practices in the project group, which reached 7.44 out of 10, higher than the 6.86 achieved by the students who did not participate in the project.

In the surveys, a field was left open for comments about the project, and very positive comments were obtained. A sample of the comments is shown below:

• ‘I find the new evaluation more dynamic, stimulating and motivating’.

• ‘This new format of the videos encourages the motivation of the students to learn the subjects in a more enjoyable way.’

• ‘The videos are a bit long, but I think you learn more than with the exam.’

• ‘I prefer making videos, as I learn more, and it is more entertaining.’

• ‘I prefer the videos as you have more time to prepare them better, the tests you gamble everything on the slightest mistake as they are tests of inserting a specific value.’

All the evidence presented above indicates that, from the student’s point of view, video assessment is more motivating and improves learning. However, the time spent on video assessment is somewhat longer than that of traditional assessment.

The performance of the videos on social networks could have been better. As said before, videos were uploaded to a TikTok account during the project's first year, and during the first 6 weeks the account's statistics rose significantly because students were excited about the new methodology. Figure 1 shows the statistics during these 6 weeks. Nevertheless, students returned to their usual social media habits after this period, and the account statistics remained stable. Most videos did not even obtain the minimum number of likes needed to increase the video mark. Regarding the YouTube account created for the project's second year, the channel achieved 1,181 views with a total watch time of 15.5 h. In this case, because the working team decided to filter the videos more strictly to avoid conceptual mistakes in the uploaded content, fewer students had their videos online, so the use of the social network was lower.


Figure 1. First six-week TikTok account statistics: (A) Followers; (B) Likes; (C) Views; (D) Comments.

Finally, from the teachers' point of view, video assessment has advantages and disadvantages compared to traditional assessment, although the advantages far outweigh the disadvantages. The time spent assessing the videos is significantly higher than the traditional assessment time, and the time devoted to solving doubts concerning the videos is also considerable. Nevertheless, the better response of the students to the new methodology, which results in a better classroom atmosphere, favors the learning of all the students in the group.

4.1 Partial credit model analysis

To obtain deeper results, we performed a partial credit model (PCM) (Hambleton, 2015) analysis of the survey data in R (Li and Baser, 2012). Because the methodology in the project's second year is more mature than in the first, we focused on the data of that year (2023/2024).

We focus the PCM analysis on questions Q3 and Q6, which address the key research objectives. Table 3 shows the analysis results, and Figure 2 shows the probabilities of each response category in the model. The analysis shows that students consider making videos more motivating than the classical test. Figure 2 shows that category 4 (strongly agree) is the most probable category. Moreover, responses are mainly polarized into categories 4 and 1 (strongly disagree). At the end of the course, when students had tried the proposed video methodology and the traditional one several times (Figure 2C), category 1 disappears, so students' opinions move more favorably toward the video methodology.
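To make the model behind Figure 2 concrete, the sketch below computes PCM category probabilities as a function of the latent trait theta. The authors fitted the model in R; this self-contained Python/numpy version and its step difficulties are purely illustrative assumptions.

```python
# Illustrative partial credit model (PCM) behind plots like Figure 2:
# probability of each Likert category (1-4) as a function of the latent
# trait theta. The authors fitted the model in R; this numpy version and
# the step difficulties below are assumptions for illustration only.
import numpy as np

def pcm_probs(theta: float, deltas: np.ndarray) -> np.ndarray:
    """Category probabilities for one item with step difficulties `deltas`.

    P(X = k | theta) is proportional to exp(sum_{j<=k} (theta - delta_j)),
    with the empty sum for the lowest category taken as 0.
    """
    cumulative = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    expterms = np.exp(cumulative - cumulative.max())  # stabilized softmax
    return expterms / expterms.sum()

deltas = np.array([-1.0, 0.2, 1.1])  # three steps -> four categories
for theta in (-2.0, 0.0, 2.0):
    print(theta, pcm_probs(theta, deltas).round(3))
# For high theta the top category dominates, matching the polarization
# toward "strongly agree" described in the text.
```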


Table 3. PCM model data in the second year.


Figure 2. Items in the PCM of questions Q3 and Q6. (A) 2024 survey 1 Q3; (B) 2024 survey 1 Q6; (C) 2024 survey 2 Q3; (D) 2024 survey 2 Q6.

Figures 2B,D show the evolution of the motivation perception for the traditional test methodology. Just before the first evaluation (Figure 2B), the most probable response is category 1 (strongly disagree), so students do not feel motivated by this kind of evaluation. At the end of the course (second survey), category 4 appears, but the low-motivation response for this evaluation is still present.

Therefore, these results reinforce the conclusions of the descriptive data analysis, pointing out that the video-based evaluation methodology is much more motivating than the traditional one.

4.2 Final methodology

After 2 years of collaborative work on the project, and considering the evidence presented, we can confidently conclude that the main objective of our joint project has been achieved: ‘To design a methodology to evaluate the knowledge acquired by students through videos for social networks’. This methodology, which we have collectively developed, involves the following steps:

• Video requirements documentation. For each laboratory practice to be evaluated by videos, a document must be created in which the following must be clearly defined:

• Statement of the problem to be solved. The statement is designed to be adaptable, allowing each group to tackle a unique problem, thereby empowering them to take ownership of their learning.

• Steps to solve the problem.

• Evaluation rubric.

• Deadline for creating the video.

• Video template. A PowerPoint template should be generated that includes:

• Cover page: with a space for students’ names.

• Slides with each of the minimum steps.

• Assessment Excel template. The Excel sheet should include each point of the rubric with its assessment, space for each student in each group, and the final grade calculation (a minimal sketch of this calculation follows the list).

• Students will upload the videos within the deadline to PoliformaT; it has been determined that 3 days is enough time for this. Once finished, the correction should be done before the next practice so that students have early feedback.

• Many of the videos include errors or defects that prevent them from being shared on social media, so sharing via social media is best treated as optional. TikTok and YouTube have been tested, and the team concluded that the final implementation could be done without social networks.
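As a companion to the assessment Excel template mentioned above, the following sketch shows a possible final-grade calculation; the rubric items and their weights are invented examples, not the ones used in the project.

```python
# Illustrative final-grade calculation, mirroring what the assessment Excel
# sheet does; the rubric items and weights below are invented examples.

rubric_weights = {
    "problem_statement": 0.2,  # clarity of the problem posed
    "solution_steps":    0.4,  # correctness of each resolution step
    "correct_result":    0.2,  # final numerical/graphical result
    "presentation":      0.2,  # video and slide quality
}

def final_grade(scores: dict[str, float]) -> float:
    """Weighted sum of per-item rubric scores (each on a 0-10 scale)."""
    return sum(rubric_weights[item] * score for item, score in scores.items())

pair_scores = {"problem_statement": 9, "solution_steps": 8,
               "correct_result": 10, "presentation": 7}
print(round(final_grade(pair_scores), 2))  # -> 8.4
```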

5 Discussion

After 2 years of the project, several reflections on the methodology have been developed during the team meetings.

5.1 Definition of assessment

With the proposed assessment method, not all the concepts developed in the practical lesson should appear in the video. The problem posed to students should draw on the knowledge developed in practice, but not every concept must be presented. Shorter assessments that require more explanation and reflection from the student are more interesting from a formative point of view and generate shorter but higher-quality videos.

5.2 Video format

The format proven to be most effective and popular with students is a PowerPoint skeleton with the essential elements the video should contain. This skeleton makes it easier for the student to organize the video and results in more videos with the desired content.

5.3 Correction and assessment methodology

The evaluation rubric is obviously necessary for the student. For the teacher, Excel sheets have been developed with the different items of the rubric that allow the quick evaluation of each scoring characteristic.

5.4 Feedback on grades

The uploading of grades to the UPV platform must be automated. With so many students, an automated procedure is required to handle the data and avoid losing any marks.

5.5 Social media factor

Uploading videos to social media may seem motivating initially, but it turns out to be of little relevance for the assessment. Students value the actual generation of the video more than viewing and sharing it on social media. In addition, awarding marks for "likes" is neither easy with large volumes of videos nor motivating for students.

5.6 Video as an element of assessment

The classroom dynamic of video generation works very well and creates a good atmosphere, allowing students to demonstrate their knowledge and engage with the subject. Students perceive a fairer assessment, as they can show the whole development of the problems.

5.7 Time spent on assessment

Unlike traditional assessment, where the teacher does not have to spend any time directly, video assessment involves additional work. However, the benefits explained throughout the reflections and the evidence presented in the surveys far outweigh the disadvantages of spending this additional time.

From the analysis presented, it can be concluded that the objective of improving student motivation and learning is achieved with the methodology generated; both surveys and student testimonials point in this direction. The videos generated during the project, after a selection period, will serve as a reference for subsequent courses. Hopefully, these resources will help students and enable them to improve their learning.

This methodology has been designed and prepared for Basics of Automatic Control, a core second-year subject with many students. This means it can be extrapolated to subjects in similar courses and to subjects with fewer students (Master's courses).

In the next academic year, the video assessment methodology will be implemented in all the laboratory practices of the subject, thus changing how the laboratory practices are assessed for these groups.

6 Conclusion

A novel assessment method has been presented based on multimedia proofs and social network sharing for the practical lessons of introductory engineering courses. Shorter assessments that require more explanation and reflection from the student are more interesting from a formative point of view. The assessment method consists of generating a video with the results of the practical lessons. The video does not present all the content developed in the practical lessons, but the most important concepts should appear. Students have a skeleton with the essential elements the video should contain; this skeleton makes it easier for the student to organize the video and results in more videos with the desired content. The correction and assessment methodology of the videos is based on an evaluation rubric that the student knows.

It is essential to underline that this study works with a sample of students that is representative regarding the number of students in the course, but small in general terms. Results point to a general acceptance of the methodology, but more research with more students and bigger groups has to be done. Future work will follow this line, testing the method with more extensive samples and more sophisticated analysis tools, such as generalized partial credit models.

Video generation motivates students to demonstrate their knowledge and engage with the subject. The proposed assessment method is completely aligned with students' way of life, since social media is a big part of daily life for many of them, and this motivates the learning process, since generating new high-impact social media content is a very challenging task.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

AC: Conceptualization, Data curation, Funding acquisition, Investigation, Project administration, Supervision, Validation, Writing – original draft, Writing – review & editing. CR-V: Conceptualization, Funding acquisition, Investigation, Project administration, Supervision, Validation, Writing – original draft, Writing – review & editing. EI: Formal analysis, Investigation, Methodology, Validation, Writing – review & editing. FC: Formal analysis, Investigation, Validation, Writing – review & editing. CB: Formal analysis, Investigation, Validation, Writing – review & editing. JM-T: Formal analysis, Investigation, Methodology, Validation, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This publication has been funded by Universitat Politècnica de València through the program: "Convocatoria de Proyectos de Innovación y Mejora Educativa. PROJECT Emergentes 2022." PROJECT-E-1854.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Arruabarrena, R., Sánchez, A., Blanco, J. M., Vadillo, J. A., and Usandizaga, I. (2019). Integration of good practices of active methodologies with the reuse of student-generated content. Int. J. Educ. Technol. High. Educ. 16:10. doi: 10.1186/s41239-019-0140-7

Bennett, R. E., Persky, H., Weiss, A., and Jenkins, F. (2010). Measuring problem solving with technology: a demonstration study for NAEP. J. Technol. Learn. Assess. 8, 1–45.

Clarke-Midura, J., and Dede, C. (2010). Assessment, technology, and change. JRTE 42:3.

Coyne, S. M., Rogers, A. A., Zurcher, J. D., Stockdale, L., and Booth, M. (2020). Does time spent using social media impact mental health?: an eight year longitudinal study. Comput. Hum. Behav. 104:106160. doi: 10.1016/j.chb.2019.106160

de Koning, B., Loyens, S., Rikers, R., Smeets, G., and van der Molen, H. (2012). Generation Psy: student characteristics and academic achievement in a three-year problem-based learning bachelor program. Learn. Individ. Differ. 22, 313–323. doi: 10.1016/j.lindif.2012.01.003

Douglass, J. A., Thomson, G., and Zhao, C. M. (2012). The learning outcomes race: the value of self-reported gains in large research universities. High. Educ. 64, 317–335. doi: 10.1007/s10734-011-9496-x

Eisenkraft, A. (2013). Closing the gap. Sci. Teach. 80, 42–45.

Garden, R. A. (1999). Development of TIMSS performance assessment tasks. Stud. Educ. Eval. 25, 217–241. doi: 10.1016/S0191-491X(99)00023-1

Gobert, J. D., Sao Pedro, M., Raziuddin, J., and Baker, R. S. (2013). From log files to assessment metrics: measuring students’ science inquiry skills using educational data mining. J. Learn. Sci. 22, 521–563. doi: 10.1080/10508406.2013.837391

Hambleton, R. K. (2015). Handbook of item response theory. ed. W. J. van der Linden. New York, NY: CRC Press.

Kassim, H., Nicholas, H., and Ng, W. (2014). Using a multimedia learning tool to improve creative performance. Think. Skills Creat. 13, 9–19. doi: 10.1016/j.tsc.2014.02.004

Ketelhut, D. J., Dede, C., Clarke-Midura, J., and Nelson, B. (2007). “Studying situated learning in a multi-user virtual environment” in Assessment of problem solving using simulations. eds. E. Baker, J. Dickieson, W. Wulfeck, and H. O’Neil (New Jersey: Lawrence Erlbaum Associates), 37–58.

Kuo, C. Y., Wu, H. K., Jen, T. H., and Hsu, Y. S. (2015). Development and validation of a multimedia-based assessment of scientific inquiry abilities. Int. J. Sci. Educ. 37, 2326–2357. doi: 10.1080/09500693.2015.1078521

Li, J., Antonenko, P. D., and Wang, J. (2019). Trends and issues in multimedia learning research in 1996–2016: a bibliometric analysis. Educ. Res. Rev. 28:100282. doi: 10.1016/j.edurev.2019.100282

Li, Y., and Baser, R. (2012). Using R and WinBUGS to fit a generalized partial credit model for developing and evaluating patient-reported outcomes assessments. Stat. Med. 31, 2010–2026. doi: 10.1002/sim.4475

Lile, R., and Bran, C. (2014). The assessment of learning outcomes. Procedia Soc. Behav. Sci. 163, 125–131. doi: 10.1016/j.sbspro.2014.12.297

Lubis, L., Febriani, B., Yana, R., Azhar, A., and Darajat, M. (2023). The use of learning media and its effect on improving the quality of student learning outcomes. Int. J. Educ. Soc. Sci. (IJESSM) 3, 7–14. doi: 10.52121/ijessm.v3i2.148

Lumadi, M. W. (2013). Challenges besetting teachers in classroom assessment: an exploratory perspective. J. Soc. Sci. 34, 211–221. doi: 10.1080/09718923.2013.11893132

Manzoor, A. (2016). “Technology-enabled learning environments” in Handbook of research on applied learning theory and design in modern education. New York: IGI Global, 545–559.

Mayer, R. E. (2017). Using multimedia for e-learning. J. Comput. Assist. Learn. 33, 403–423. doi: 10.1111/jcal.12197

Mayer, R. E., Bove, W., Bryman, A., Mars, R., and Tapangco, L. (1996). When less is more: meaningful learning from visual and verbal summaries of science textbook lessons. J. Educ. Psychol. 88, 64–73. doi: 10.1037/0022-0663.88.1.64

Mtshali, T. I., Ramaligela, S. M., and Makgato, M. (2021). Actualisation of practical lessons through assessment in civil technology. J. Tech. Educ. Train. 13, 44–55.

National Research Council (2001). Knowing what students know: the science and design of educational assessment. Washington, DC: The National Academies Press.

Popat, A., and Tarrant, C. (2023). Exploring adolescents’ perspectives on social media and mental health and well-being – a qualitative literature review. Clin. Child Psychol. Psychiatry 28, 323–337. doi: 10.1177/13591045221092884

Quellmalz, E. S., and Pellegrino, J. W. (2009). Technology and testing. Science 323, 75–79. doi: 10.1126/science.1168046

Quellmalz, E. S., Timms, M. J., Silberglitt, M. D., and Buckley, B. C. (2012). Science assessments for all: integrating science simulations into balanced state science assessment systems. J. Res. Sci. Teach. 49, 363–393. doi: 10.1002/tea.21005

Raja, R., and Nagasubramani, P. C. (2018). Recent trend of teaching methods in education. J. Appl. Adv. Res. 3, 33–35. doi: 10.21839/jaar.2018.v3S1.165

Richardson, A., and Blair, E. (2015). Understanding practical engagement: perspectives of undergraduate civil engineering students who actively engage with laboratory practicals. Caribb. Teach. Scholar 5.

Richter, J., Scheiter, K., and Eitel, A. (2016). Signaling text-picture relations in multimedia learning: a comprehensive meta-analysis. Educ. Res. Rev. 17, 19–36. doi: 10.1016/j.edurev.2015.12.003

Ruiz-Primo, M. A., and Shavelson, R. J. (1996). Problems and issues in the use of concept maps in science assessment. J. Res. Sci. Teach. 33, 569–600. doi: 10.1002/(SICI)1098-2736(199608)33:6<569::AID-TEA1>3.0.CO;2-M

Sato, M., Wei, R. C., and Darling-Hammond, L. (2008). Improving teachers’ assessment practices through professional development: the case of National Board Certification. Am. Educ. Res. J. 45, 669–700. doi: 10.3102/0002831208316955

Saunders, L. (2012). Faculty perspectives on information literacy as a student learning outcome. J. Acad. Librariansh. 38, 226–236. doi: 10.1016/j.acalib.2012.06.001

Sliney, A., and Murphy, D. (2011). “Using serious games for assessment” in Serious games and edutainment applications. eds. M. Ma, A. Oikonomou, and L. C. Jain (London: Springer), 225–243.

Susi, T., Johannesson, M., and Backlund, P. (2007). Serious games: an overview. Technical Report HS-IKI-TR-07-001, School of Humanities and Informatics, University of Skövde, Sweden. Available at: https://www.diva-portal.org/smash/get/diva2:2416/FULLTEXT01.pdf

Timmerman, B. E. C., Strickland, D. C., Johnson, R. L., and Payne, J. R. (2010). Development of a ‘universal’ rubric for assessing undergraduates’ scientific reasoning skills using scientific writing. Assess. Eval. High. Educ. 36, 509–547. doi: 10.1080/02602930903540991

Tremblay, K. (2013). “OECD Assessment of Higher Education Learning Outcomes (AHELO)” in Modeling and measuring competencies in higher education. Professional and VET learning, vol. 1. eds. S. Blömeke, O. Zlatkin-Troitschanskaia, C. Kuhn, and J. Fege (Rotterdam: SensePublishers).

Ubong, B., and Oguzor, N. S. (2007). Vocational education and the development of adult learners in Nigeria through skills acquisition schemes. Bulg. J. Sci. Educ. Policy 1.

Valkenburg, P. M., Meier, A., and Beyens, I. (2022). Social media use and its impact on adolescent mental health: an umbrella review of the evidence. Curr. Opin. Psychol. 44, 58–68. doi: 10.1016/j.copsyc.2021.08.017

Vinales, J. J. (2015). The learning environment and learning styles: a guide for mentors. Br. J. Nurs. 24, 454–457. doi: 10.12968/bjon.2015.24.8.454

Wang, Z., Sundararajan, N., Adesope, O. O., and Ardasheva, Y. (2017). Moderating the seductive details effect in multimedia learning with note-taking. Br. J. Educ. Technol. 48, 1380–1389. doi: 10.1111/bjet.12476

Weng, C., Otanga, S., Weng, A., and Cox, J. (2018). Effects of interactivity in E-textbooks on 7th graders science learning and cognitive load. Comput. Educ. 120, 172–184. doi: 10.1016/j.compedu.2018.02.008

Wolf, R., Zahner, D., and Benjamin, R. (2015). Methodological challenges in international comparative post-secondary assessment programs: lessons learned and the road ahead. Stud. High. Educ. 40, 471–481. doi: 10.1080/03075079.2015.1004239

Zlatkin-Troitschanskaia, O., Shavelson, R. J., and Kuhn, C. (2015). The international state of research on measurement of competency in higher education. Stud. High. Educ. 40, 393–411. doi: 10.1080/03075079.2015.1004241

Keywords: assessment, resource generation, social network, motivation, multimedia

Citation: Correcher A, Ricolfe-Viala C, Ivorra E, Cordero F, Blanes C and Martínez-Turégano J (2025) Innovative assessment based on multimedia proofs and social network sharing for introductory engineering courses. Front. Educ. 9:1468747. doi: 10.3389/feduc.2024.1468747

Received: 22 July 2024; Accepted: 18 December 2024;
Published: 08 January 2025.

Edited by:

Osbaldo Turpo Gebera, National University of Saint Augustine, Peru

Reviewed by:

Quan Zhang, Jiaxing University, China
Luis Alex Alzamora De Los Godos Urcia, Cesar Vallejo University, Peru

Copyright © 2025 Correcher, Ricolfe-Viala, Ivorra, Cordero, Blanes and Martínez-Turégano. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carlos Ricolfe-Viala, cricolfe@ai2.upv.es
