The effects of using analytical rubrics in peer and self-assessment on EFL students’ writing proficiency: a Vietnamese contextual study

Abstract

This research investigates the effectiveness of utilizing analytic rubrics in peer-assessment (PA) and self-assessment (SA) methodologies to enhance the essay writing proficiency of English as a Foreign Language (EFL) students in the Vietnamese context. It further contributes to the existing body of literature regarding formative assessment and its potential to improve student learning outcomes. A total of 44 university students, all English majors, were divided into two distinct groups, each consisting of 22 participants. One group applied analytic rubrics for SA, while the other used the same tool for PA. The writing performance of the two groups was assessed and compared in pre- and post-tests. The findings revealed no significant difference between the SA and PA groups in the pre-test. However, in the post-test, the SA group demonstrated significantly superior performance compared to the PA group, with noticeable improvements across all evaluated criteria. Moreover, these results showed that the use of analytic rubrics in SA and PA methods positively impacted the EFL students’ writing skills, particularly in the areas of content and language use. This has practical implications for teachers, curriculum developers, and policymakers in designing and implementing formative assessment strategies for EFL learners. Further research is needed to examine the long-term effects of employing analytic rubrics, and to understand the potential influence of other contextual factors on student learning outcomes.

Introduction

Teaching English as a Foreign Language (EFL) in Vietnam faces many issues, despite various attempts to improve the system (Le & Chen, 2018). The traditional way of focusing mainly on grammar has led students to struggle in expressing their ideas in English, both verbally and in writing (Dao & Newton, 2021). In particular, there are several problems with EFL assessments in this context, one of them being the lack of accuracy and consistency. According to Pham (2016), the traditional assessment methods in Vietnamese classrooms, which lean heavily on rote memorization and repetitive exercises, often fail to provide a clear picture of students’ true linguistic capabilities. These methods, while efficient in assessing memorization skills, often overlook students’ abilities to apply the language in real-world contexts. This is further worsened by the overemphasis on final or ‘summative’ assessments, which overlooks the important role of ongoing or ‘formative’ assessments in supporting learning (Can, 2019; Nguyen & Truong, 2021). Consequently, there is an inconsistency between the students’ scores in classroom assessments and their actual proficiency in English. Further, as Ai et al. (2019) pointed out, due to limited training and standardization processes for EFL educators in Vietnam, there can be notable variability in assessment standards across different institutions and regions. Such disparities can result in students receiving varying feedback and grades for similar performances.

Formative assessments, which include self-assessment (SA) and peer-assessment (PA), are considered a promising approach to improving the teaching and learning process (Panadero et al., 2016). Such assessments could also help improve EFL students’ writing skills, an area that remains a notable challenge (Phuket & Othman, 2015). Additionally, they offer EFL teachers a way to handle the complex issue of evaluating students’ writing skills (Ölmezer-Öztürk & Aydin, 2019).

Alongside SA and PA, scoring rubrics also play a critical role in evaluating writing. Using scoring rubrics provides clear guidelines for assessment and can be applied in both SA and PA situations (Yamanishi et al., 2019). With this in mind, the main focus of this study is to examine the effects of teaching students how to use analytical rubrics for SA and PA on their writing performance. In doing so, this research is expected to provide useful ideas for more effective EFL teaching and assessment strategies in Vietnam.

Literature review

Analytical rubrics

The role of analytical rubrics in the assessment of student work forms an integral part of contemporary pedagogy. Jones et al. (2017) argue that these rubrics are particularly effective for evaluating students’ written and oral productions. As per the definition by Çetin (2011), rubrics serve as scoring tools that can guide the evaluation of various assignments. By allocating specific points for each category within the rubric, raters are provided with a precise and comprehensive guideline for the analysis and scoring of a text.

In addition, the use of rubrics can enhance both the reliability and the professionalization of writing assessments (Jonsson & Svingby, 2007). Through such an approach, the objective appraisal of student writing becomes a more streamlined process. Moreover, rubrics can be applied to a diverse range of assignments and offer a swift evaluation mechanism through their detailed subscales.

Analytic rubrics, in particular, have attracted the preference of numerous language teachers due to their capacity to provide multiple scales and scores for a performance. This preference has been reinforced by research, such as the study by Beyreli and Ari (2009), which confirmed the usefulness of analytic rubrics in the assessment of writing performance.

The adoption of analytic rubrics in writing assessment presents a viable strategy to gauge writing proficiency levels and improve written work. Through the provision of scoring feedback and self-correction mechanisms, these rubrics facilitate tangible improvements in writing performance (Çetin, 2011). The benefits of this assessment tool extend to both teachers and students, ensuring an efficient and effective evaluation process.

However, it is crucial for language teachers to exercise careful judgment when incorporating analytic rubrics into classroom use. The criteria for evaluating student writing should be thoughtfully considered, grounded in the knowledge of composition, text linguistics, and literature. In terms of practical applications, analytic rubrics may be most suitable for assessing written compositions, particularly when the goal is to measure writing enhancement over time.

Self-assessment

SA allows learners to introspectively evaluate the quality of their work and learning progress. According to Taylor et al. (2012), SA encompasses self-grading, self-testing, and self-rating. It requires learners to compare their performance against established criteria or standards. Despite its endorsement as a key component of formative evaluation by experts and the Assessment for Learning (AfL) movement, SA is seldom used in language classrooms.

SA carries considerable benefits for learners, such as academic improvement, activation of metacognitive abilities, and enhancement of self-regulation skills. Moreover, SA can boost learner motivation, engagement, and efficacy while simultaneously reducing teachers’ assessment workload (Boud et al., 2013; McCarthy et al., 1985; McMillan & Hearn, 2008; Yan et al., 2020). For the effective implementation of SA in language classrooms, according to Spiller (2012), teachers should have an initial dialogue with students about the principles and assumptions of SA. Moreover, teachers should clearly explain the procedures and rationale for SA activities, involve students in establishing evaluation criteria, and ensure students understand the standards they are striving to meet.

However, despite the myriad benefits, SA is not without limitations. One critical concern is the potential for over- or under-estimation of personal abilities (Carroll, 2020). Learners, especially those with lower proficiency levels, may lack the metacognitive skills needed to evaluate their performance accurately (Cuesta-Melo et al., 2022). The reliability of SA has also been questioned by scholars, as SA often exhibits lower reliability compared to teacher assessment or PA due to inherent subjectivity and potential bias (Li & Zhang, 2021). Anxiety induced by the responsibility of assessing personal work can also undermine the efficacy of SA (Çakmak et al., 2023). Finally, the success of SA depends on the clear and understandable construction of criteria, a task that can be challenging for students (Harris & Brown, 2018).

Peer-assessment

PA signifies a transition from a teacher-centered to a student-centered educational approach. It requires students to provide feedback on their peers’ work, based on established excellence criteria (Wride, 2017). PA is a valuable assessment form in English language teaching and learning as it allows students to collaborate in assessing their peers’ work, promotes active engagement in learning, and cultivates metacognitive and interpersonal skills (Spiller, 2012). Moreover, PA aligns with social constructivist education models, and it reduces the teacher’s grading workload, facilitating more efficient management of the assessment process (Wride, 2017).

PA offers several advantages in language instruction and learning. Primarily, it prepares learners for their professional futures by involving them in the decision-making process (Spiller, 2012). PA also enhances learning through peer learning and feedback, which strengthens writing skills, balances power dynamics, and develops the capacity for receiving and providing feedback (Le et al., 2023; Mustafa & Yusuf, 2022; Ritonga et al., 2022).

To implement PA effectively in language classrooms, anonymity is recommended for both the assessors and the assessed during the scoring process (Li et al., 2012). Such a structured process enriches students’ collaborative learning experience and permits more objective marking. Therefore, PA is a valuable tool for fostering students’ collaborative learning and a potential alternative assessment method for educators in language classrooms.

Despite its numerous advantages, PA has its own drawbacks. According to Zhao (2018), a key challenge of PA is the potential lack of trust in peers’ judgments. Students may perceive their peers’ evaluations as less reliable or accurate than teacher assessments, compromising the credibility of PA. deBoer et al. (2023) also noted that PA may not benefit all learners, particularly those with lower proficiency who might lack the linguistic competence to provide accurate evaluations. This can lead to biased or inaccurate feedback, potentially discouraging learners who are still developing their language skills. Friendship bias is another concern, as social relationships may affect the assessment process, with students potentially inflating grades for friends or deflating grades for disliked peers (Alqarni & Alshakhi, 2021). This subjectivity could distort the assessment results.

PA also requires substantial time and effort from students to review and provide constructive feedback on peers’ work, potentially increasing student workload and detracting from other aspects of learning (Wang et al., 2016). Lastly, the fear of negative evaluation can impact the effectiveness of PA. Students may feel uncomfortable or anxious about giving and receiving critical feedback from their peers, negatively affecting their learning experience and the overall effectiveness of PA (Panadero & Alqassab, 2019).

Related studies

The potential and effectiveness of PA in English writing classrooms have been a topic of interest in educational research. In a seminal study by Topping (1998), the research was set in a variety of English writing classrooms, employing qualitative methods like classroom observations and student interviews. The findings indicated that PA, when appropriately structured, could rival the efficiency of instructor evaluations in enhancing students’ writing skills. This is particularly due to the collaborative environment it fosters, which bolsters critical thinking. However, implementation has its challenges. A study by Lundstrom and Baker (2009) conducted in American secondary schools used mixed methods, including student surveys and analysis of revised drafts, to explore these challenges. Their findings highlighted that without clear guidelines and adequate training, the effectiveness of peer-assessment might be compromised. Another critical study by Cho and MacArthur (2010) conducted in a university setting employed experimental methods, wherein students’ drafts were evaluated both before and after peer review sessions. Their findings strongly suggested that peer reviews substantially improved the quality of revisions, but the feedback’s effectiveness was contingent on its specificity and actionability.

Parallel to PA, SA is lauded for its potential to promote student autonomy and introspective reflection. Blanche and Merino’s (1989) study surveyed college students enrolled in English courses, aiming to understand the dynamics of SA. Their results suggested that students who consistently engaged in SA demonstrated heightened awareness of their writing strengths and pitfalls. However, the reliability of SA has been contested. Ross (1998) carried out a longitudinal study in Canadian middle schools, employing quantitative methods to compare students’ SA with instructor grades. The findings raised concerns regarding the potential for students to overestimate or underestimate their writing abilities, suggesting that SA, while valuable, might require supplementary evaluation methods to ensure accuracy.

Despite the prominence of PA and SA in the literature, the potential role of rubrics, especially analytical ones, remains less explored. A notable study by Reddy and Andrade (2010) set in American high schools employed experimental methods to compare the effectiveness of holistic versus analytical rubrics. While both were found beneficial, analytical rubrics, with their detailed breakdown of assessment criteria, allowed for richer, more detailed feedback. However, this potential has not been thoroughly explored, especially in diverse contexts.

Vietnam’s traditional educational landscape is dominated by teacher-led methods, but there is an emerging inclination towards learner-centered approaches, as observed by Nguyen (2011) in a qualitative study spanning multiple Vietnamese universities. Through interviews and classroom observations, Nguyen noted the gradual acceptance of PA and SA. Yet, the research notably lacked an exploration into the role of analytical rubrics in such assessments.

Existing studies offer substantial insights into PA and SA in English writing. However, the critical research gap lies in the limited emphasis on the importance of analytical rubrics, a gap even more pronounced in the context of Vietnam. This current study, therefore, aims to bridge this gap by investigating the underexplored benefits of analytical rubrics in shaping the assessment experience for Vietnamese English learners.

Methods

Research design

This study employed an experimental research design to delve into the impact of using analytic rubrics in PA and SA on students’ writing performances. Instead of focusing on the overall effects of the two types of treatments (SA and PA with the analytic rubrics) on two distinct student groups, the primary objective was to compare the outcomes from these two methodologies. The aim was not to assert an overarching influence of both treatments but rather to compare their individual effects and discern any variance between them. The choice to employ an experimental research design to assess the impact of analytic rubrics on PA and SA is consistent with research methodologies that emphasize control and causality (Creswell & Creswell, 2017). By focusing on comparing outcomes from PA and SA methodologies rather than their holistic impacts, this approach aligns with recommendations to draw nuanced, comparative findings in educational research (Fraenkel et al., 2012).

Additionally, the absence of a control group in this design did warrant further clarification. The purpose of this research was not to evaluate the baseline efficacy of either SA or PA when used independently, but rather to investigate and compare the respective impacts when facilitated by analytic rubrics. The researchers were interested in the differential impacts of the SA and PA methodologies when underpinned by the same rubric framework. Not having a control group is a design choice that has precedent in experimental research, especially when the primary interest lies in comparing two active treatments rather than comparing a treatment against a neutral condition (Cook et al., 2002). The crux of this study was to understand the differential impacts of SA and PA, both undergirded by the same analytic rubric, which is why a direct comparison between the two made sense in the context of this research design.

In this research design, two treatment groups were established. One group incorporated an analytic rubric within a PA framework, while the other applied the same rubric within an SA framework. The comparative performance of the two groups served to highlight any differential impact that the two assessment methodologies may have on writing performance when utilizing the same analytic rubric. Establishing two treatment groups, one utilizing PA and the other SA—both facilitated by the same analytic rubric—is a methodological approach that aids in eliminating confounding variables, ensuring that any observed differences in outcomes can be attributed to the assessment methods themselves rather than differences in rubric use (Bryman, 2016).

In order to address the research question— “How does the use of analytic rubrics for PA and SA influence students’ essay writing performance?”—this experimental research design was chosen. Students’ essays were compared and analyzed. The application of both descriptive and inferential statistical analyses allowed for a rigorous evaluation of the data, with the results interpreted in light of the research question and objectives. Choosing this experimental design to address the posed research question is in line with recommendations to employ methodologies that allow for a robust assessment of causality (Cohen et al., 2013). Further, the use of both descriptive and inferential statistical analyses is a comprehensive approach advocated by many in the field, ensuring a rigorous and in-depth evaluation of collected data (Pallant, 2020).

Participants

In this study, the participants comprised 44 English-major students from a reputable university in Southwest Vietnam. The participants were divided equally into two groups, each receiving a distinct treatment method: one group was trained to use analytic rubrics for SA, and the other for PA. The participants were initially ranked according to their pretest results, and then alternately assigned to either the SA or PA group to maintain balance in proficiency levels across the groups.
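
To make the balancing procedure concrete, the following is a minimal sketch of such a ranked, alternating assignment, written in Python purely for illustration; the function name, data structure, and ordering rule are assumptions rather than the authors’ actual procedure.

```python
# Illustrative sketch (not the authors' actual code) of the ranked,
# alternating assignment described above: students are sorted by
# pretest score and then dealt alternately into the SA and PA groups
# so that proficiency levels stay balanced across conditions.

def assign_groups(pretest_scores):
    """pretest_scores: dict mapping student ID -> pretest total (0-100)."""
    ranked = sorted(pretest_scores.items(), key=lambda kv: kv[1], reverse=True)
    sa_group, pa_group = [], []
    for rank, (student, _score) in enumerate(ranked):
        # Even ranks go to SA, odd ranks to PA (hypothetical ordering).
        (sa_group if rank % 2 == 0 else pa_group).append(student)
    return sa_group, pa_group

# Example with hypothetical data:
scores = {"S01": 74, "S02": 68, "S03": 81, "S04": 70}
sa, pa = assign_groups(scores)
print(sa, pa)  # ['S03', 'S04'] ['S01', 'S02']
```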

These treatments were administered over a 17-week period during which all participants were engaged in the same set of writing modules designed to enhance their essay writing skills across various topics. Regarding the content of the English writing tasks, the students were taught to write academic essays that covered a range of general topics. However, specific emphasis was given to the drafting of treatises as part of the module, given their relevance to the English-major students’ learning objectives.

The implementation of the analytic rubrics, which were adopted and adapted from Jacobs et al.’s (1981) work, took place within the learning modules. In addressing the Vietnamese educational context, several thoughtful modifications were made to Jacobs et al.’s (1981) original analytical rubrics. Firstly, to accommodate linguistic differences and ensure clarity, specific terminologies and language structures within the rubrics were either translated or simplified, with an emphasis on removing or elucidating technical jargon. Secondly, cultural adaptations were made, especially concerning content and organization. These changes respected Vietnamese academic traditions and cultural nuances in written communication, such as the emphasis on storytelling or typical argumentative structures. Thirdly, the grading scale of the rubrics was reshaped to range from “excellent” to “very poor”, a categorization that Vietnamese students find familiar. This decision aimed at facilitating ease of use and reducing potential grading ambiguities. Lastly, to further enhance the rubrics’ accessibility and instructional value, context-relevant examples were integrated under each assessment criterion, offering a tangible reference point for users. Before implementing these rubrics in the main study, a pilot phase was conducted involving a select group of students and teachers to validate their appropriateness and clarity. Feedback from this phase led to further refinements, ensuring the resultant rubrics maintained the foundational principles of the original while being tailored to Vietnamese EFL learners’ unique needs and context.

These rubrics, designed for a structured and systematic evaluation of writing quality, were introduced and explained to the students at the outset. The students in the PA group used these rubrics to assess their peers’ writing performances, while those in the SA group used them for evaluating their own writing works. The rubrics were based on five primary criteria: content (30%), organization (20%), vocabulary (20%), language use (25%), and mechanics (5%). Each criterion was clearly defined, and the assessment scores ranged from ‘excellent’ to ‘very poor’. The details of the rubrics are displayed in Table 1.

Table 1 Analytic rubrics
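
As an illustration of how the five weighted criteria combine into a single total, the short sketch below treats the percentage weights above as maximum points out of 100 (content 30, organization 20, vocabulary 20, language use 25, mechanics 5); the function and the sample essay scores are hypothetical, not taken from the study’s data.

```python
# Illustrative sketch of combining the five weighted rubric criteria
# into a total score out of 100, using the maxima described above.

MAX_POINTS = {
    "content": 30,
    "organization": 20,
    "vocabulary": 20,
    "language_use": 25,
    "mechanics": 5,
}

def total_score(criterion_scores):
    """criterion_scores: points awarded per criterion, each <= its maximum."""
    for criterion, points in criterion_scores.items():
        if points > MAX_POINTS[criterion]:
            raise ValueError(f"{criterion} exceeds its maximum of {MAX_POINTS[criterion]}")
    return sum(criterion_scores.values())

# Hypothetical essay assessment:
essay = {"content": 24, "organization": 16, "vocabulary": 15,
         "language_use": 20, "mechanics": 4}
print(total_score(essay))  # 79 out of 100
```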

Jacobs et al.’s (1981) analytic rubric was chosen for its relevance and adaptability to this research context. Its established reputation in the field of English language teaching and learning, along with its comprehensive structure that addresses multiple facets of writing, made it a suitable tool for this research. The detailed criteria provide valuable guidelines for students to make accurate and informed judgments about their work and their peers’ work, aligning with the primary focus of the study. The applicability of Jacobs et al.’s (1981) rubric extends to diverse EFL settings, including Asian contexts similar to our study setting, reinforcing its suitability (Hamp-Lyons & Henning, 1991; Weigle, 2002). Additionally, its clear structure and easily understandable criteria cater well to the EFL student participants who might be new to the concepts of SA and PA. Most importantly, in designing an English writing course, the alignment of learning contents with assessment rubrics is paramount to achieve desired outcomes. It is vital to choose a rubric that not only holds esteem in the academic community but also matches the specific objectives and criteria outlined in the course’s curriculum. While reputation can serve as a guiding factor, it should not overshadow the need for alignment with teaching and learning goals. The English writing program’s core objectives aim to equip the students in this educational setting with a mastery of writing fundamentals, hone their critical thinking and argumentation skills, enhance self-reflection and peer-review abilities, ensure effective organization and structuring of their pieces, and elevate their writing style and voice. Jacobs et al.’s (1981) analytic rubric aligns seamlessly with these objectives. Its criteria for content, organization, language use, vocabulary, and mechanics reflect the program’s multifaceted goals, ensuring students are assessed holistically. Particularly, the rubric’s emphasis on cohesion, logical argumentation, and its inherent design encouraging SA and PA resonate with the program’s focus on critical thinking, self-reflection, and peer feedback. This congruence underscores the rubric’s suitability, making it an optimal tool to complement and enhance the program’s comprehensive assessment processes.

Given these reasons, the choice of the analytic rubrics was justified, providing a robust, validated, and user-friendly tool for this study. However, it is essential to note that while the rubrics were adapted to suit the specific Vietnamese contexts based on feedback from students and teachers, this might result in them being more customized and potentially less generic for a broader audience. As the study was designed, the influence of the rubrics-aligned writing modules and the introduction and use of the rubrics were recognized as integral parts of the research experiment. They were designed to synergize with the SA and PA methodologies, making their role in enhancing students’ achievement significant along with the assessment treatments. The rubric-guided teaching and assessment methodology, therefore, formed the fulcrum of this study, rather than being seen as separate variables.

Tests as data collection instrument

The methodology employed in this study utilized a pre-test and post-test design to gather data, with both tests requiring students to produce a 250-word essay within a span of one hour. Both assessments were designed around the theme of “Music”, aiming to provide a consistent thematic context to the writing tasks.

An expert in composition teaching with more than three years of experience designed the tests, paying specific attention to their structure and guidelines. It was vital to ensure the consistency of the writing tasks throughout the teaching process for an accurate comparative evaluation of students’ progress.

To maintain the structural parallelism between the pre-test and post-test, each test was designed with identical sections and formats. Both tests contained three sections: a brief introduction, a body comprising several paragraphs, and a concluding statement. This structure was deliberately chosen to maintain consistency, allowing for a fair comparison of the students’ writing skills at different points in time.

Despite the structural similarities, the essay prompts in the two tests were intentionally varied to prevent the possibility of students relying on memory, which could potentially affect the reliability of the results. The prompts were constructed to fall within the broader theme of “Music,” yet were sufficiently distinct to ensure that the students’ responses reflected their current understanding and ability, rather than recalling and reiterating previously formed ideas.

The pre-test was administered at the start of the study period before the implementation of the teaching modules and use of the rubrics. Conversely, the post-test was given at the end of the 17-week study period. The similarity in the structure of the pre-test and post-test, the consistency in the thematic content, and the variation in the essay prompts were all carefully considered to ensure the accuracy and reliability of the test results. This attention to detail in test design helps to minimize the impact of extraneous variables and underscores the validity of the comparison between the pre-test and post-test results.

Data analysis

The process of data analysis in this study unfolded through a series of stages. Initially, the teacher administering the intervention, in concert with two additional evaluators, assessed the students’ essays. They used identical analytical rubrics based on Jacobs et al. (1981), as demonstrated in Table 1. Subsequently, an analysis, conducted using SPSS version 20.0, was undertaken on the results from the pre and post-tests from the two groups, aiming to generate a thorough understanding of the intervention’s impact on the students’ writing abilities. An Independent-Sample t-test was also employed to compare the results across five different criteria between the two groups at both the pre-test and post-test phases. The outcomes from these tests were designed to illuminate the interventions’ effects on the students’ writing performance (Pallant, 2020). To further the analysis, the Paired-Sample t-test was utilized to compare the mean scores of the pre-test and post-test within each group. This test aimed to identify any significant differences in performance over time for the same group of students. Through the Paired-Sample t-test results, the study aimed to understand the effectiveness of the conditions on students’ writing improvements. In the context of this analysis, a p-value less than 0.05 was accepted as an indicator of statistical significance.
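
For readers who do not use SPSS, the sketch below reproduces the same two analyses (Independent-Samples and Paired-Samples t-tests, with 0.05 as the significance threshold) using SciPy; the score arrays are invented for illustration and do not correspond to the study’s actual data.

```python
# Sketch of the reported analyses using SciPy rather than SPSS 20.0.
from scipy import stats

sa_pre  = [68, 72, 70, 75, 66, 71]   # SA group, pre-test totals (illustrative)
sa_post = [80, 86, 83, 88, 79, 85]   # SA group, post-test totals (illustrative)
pa_pre  = [70, 73, 69, 74, 68, 72]   # PA group, pre-test totals (illustrative)
pa_post = [75, 78, 74, 80, 73, 77]   # PA group, post-test totals (illustrative)

# Independent-samples t-test: compare the two groups at each time point.
t_pre,  p_pre  = stats.ttest_ind(sa_pre,  pa_pre)
t_post, p_post = stats.ttest_ind(sa_post, pa_post)

# Paired-samples t-test: compare pre vs. post within each group.
t_sa, p_sa = stats.ttest_rel(sa_pre, sa_post)
t_pa, p_pa = stats.ttest_rel(pa_pre, pa_post)

alpha = 0.05  # significance threshold used in the study
print(f"pre-test between groups:  t={t_pre:.2f}, p={p_pre:.3f}")
print(f"post-test between groups: t={t_post:.2f}, p={p_post:.3f}")
print(f"SA pre vs. post: t={t_sa:.2f}, p={p_sa:.3f}; significant: {p_sa < alpha}")
print(f"PA pre vs. post: t={t_pa:.2f}, p={p_pa:.3f}; significant: {p_pa < alpha}")
```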

Procedures

This research was divided into three distinct stages, namely the pre-intervention stage, the intervention stage, and the post-intervention stage.

Pre-intervention

During the initial stage, the research team embarked on an in-depth review of existing literature to shape the conceptual framework for the research. Simultaneously, previous analytical models were assessed to determine the most appropriate model for the research context. After careful consideration, the model adapted from Jacobs et al. (1981) was selected as the most suitable. After the establishment of the framework, the data collection instruments, specifically the tests, were developed. With the approval of a university in Southwest Vietnam, the research was conducted in two of its writing classes. Two writing teachers were approached to participate in the study, with one of them, pseudonymously called Mike, agreeing to instruct both research conditions. The research team briefed Mike about the research’s objectives, methodology, and implications and obtained his consent to participate. The 44 English-major students who voluntarily agreed to participate were then asked to take the pre-test, marking the beginning of the experimental phase of the research. The students were then divided into two equal groups based on their pre-test scores.

Intervention

The intervention phase spanned over a 17-week period, during which students were engaged in a 150-min per week regimen, equivalent to three periods as per the institutional norms. This phase was divided into two sub-stages. The first two weeks involved training the students to use the analytical model and understand the process of essay writing. The following 15 weeks comprised the experimental phase where students applied the analytical model for SA in the SA group and PA in the PA group.

During the initial two weeks, Mike followed a systematic approach to teach the students how to use the analytical model. To do so, Mike provided the students with writing samples that he had assessed using the chosen model. Students had the opportunity to review their evaluations of the sample papers against the scores Mike had given. Throughout this process, Mike thoroughly explained the model and the reasoning behind the assigned scores. After this two-week instructional phase, the research team was confident that students had a sound understanding of how to use the analytical model, readying them for the second phase of the intervention.

In the second phase of the intervention, Mike strictly followed the lesson plans developed by the research team, which were divided into four parts: Warming up, Pre-writing, While-writing, and Post-writing. Depending on the needs of each session, the Warm-up stage was used to engage students’ attention, introduce the lesson, or improve the learning atmosphere. The Pre-writing stage provided students with the writing topic, vocabulary, grammar rules, and the structural framework based on the functions of the writing genres. After this, students had a 40-min window to write their essays. Those in the PA group were then tasked with grading their peers’ writing using the analytical model and providing a rationale for their grades, while those in the SA group used the model to assess their own work. During this evaluation phase, Mike would circulate in the classroom, ensuring correct use of the analytical model. If any errors were found, Mike would immediately correct them, ensuring students had a deep understanding of the model.

Post-intervention

Following the 15-week intervention, which included writing instruction and the application of the analytical model for SA and PA, a post-test was administered to evaluate the differential impact of the two interventions on students’ writing performance. The improvement in students’ writing proficiency due to each treatment was determined using the Paired-sample t-test, while the Independent-sample t-test was used to determine if there were any significant differences between the two groups.

Results and discussion

Peer-assessment

Table 2 illustrates the comparative analysis of the effect of using analytical rubrics for PA on students’ writing proficiency.

Table 2 PA condition

The study reveals that using analytical rubrics for PA significantly improved students’ writing performance. Comparing pre- and post-test scores across five variables (Content, Organization, Vocabulary, Language Use, and Mechanics), significant improvements were found in Content (mean score increased from 21.60 to 23.28) and Language Use (mean score increased from 16.80 to 19.28). However, no significant changes were observed in Organization, Vocabulary, and Mechanics. The total mean score increased from 72.09 to 76.77, indicating overall improvement in students’ writing performance due to the use of analytical rubrics for PA. Extant literature has underscored the pivotal role of PA in nurturing writing competencies among learners. The process of appraising peers’ work facilitates critical thinking, reflective learning, and heightened engagement in the writing process (Liu & Carless, 2006; Topping, 2009). These studies substantiate the current findings, which showed a statistically significant improvement in Content and Language Use when employing analytical rubrics for PA. Furthermore, the absence of statistically significant improvement in Organization, Vocabulary, and Mechanics aligns with previous research positing that certain facets of writing may necessitate more targeted interventions or explicit instruction to achieve discernible progress (Graham, 2012; Harrison et al., 2015). Consequently, while the utilization of analytical rubrics for PA contributes positively to students’ writing performance in specific domains, it is crucial to recognize that supplementary strategies may be requisite to address other areas of writing.

Self-assessment

Table 3 depicts the effect of using analytical rubrics for SA on the students’ writing competency.

Table 3 SA condition

The findings show that using analytical rubrics for SA significantly improved students’ writing performance across all variables. Comparing pre- and post-test scores, significant improvements were found in Content (mean score increased from 20.70 to 24.79), Organization (14.06 to 17.02), Vocabulary (14.75 to 16.68), Language Use (17.27 to 21.60), and Mechanics (4.24 to 4.66). The total mean score increased from 71.04 to 84.77, indicating a substantial overall improvement in students’ writing performance due to the use of analytical rubrics for SA. Prior studies have persistently underscored the effectiveness of SA in fostering writing competencies among language learners. SA furnishes educators with the means to inculcate metacognitive awareness, self-regulation, and autonomous learning in students, thereby enhancing their writing expertise (Andrade & Brown, 2016; Panadero et al., 2012). These findings align with previous results, showing a significant improvement across all areas when using analytical rubrics for SA. Also, the current results agree with the theory that SA, when supported by clear guidelines and directed teaching, can lead to progress in various aspects of writing (Ross, 2006; Zimmerman & Kitsantas, 2002). The existing results show a noticeable boost in Content, Organization, Vocabulary, Language Use, and Mechanics, supporting the idea that using analytical rubrics for SA, along with a systematic approach, can produce positive effects in various areas of writing.

Comparison between the two conditions

Table 4 presents the comparative analysis of the results of the two conditions.

Table 4 The comparative analysis between the two conditions

The findings show no significant difference in writing performance between the SA (MSA = 71.04) and PA (MPA = 72.09) groups in the pre-test (t-value = -0.89, p-value = 0.78). However, in the post-test, a significant difference was found between the SA (MSA = 84.77) and PA (MPA = 76.77) groups (t-value = -3.73, p-value = 0.00), with the SA group showing better performance. In summation, the test results show a significant difference in the writing skills of learners using SA and those using PA when applying analytical rubrics, with the SA group showing a better skill level. This result supports the theory that SA, when carried out in a well-organized and effective manner, can lead to greater improvements in writing performance compared to PA (Ross, 2006; Zimmerman & Kitsantas, 2002). Students who engage in SA may develop a stronger sense of ownership, responsibility, and understanding of their learning path, which ultimately results in improved writing skills (Boud & Falchikov, 2006; Nicol & Macfarlane-Dick, 2006). Conversely, while PA has its benefits, it also has notable drawbacks that might affect the performance of the PA group in this study. Students might not trust their peers’ evaluations, perceiving them as less credible than teachers’ assessments (Zhao, 2018). Furthermore, PA might not assist all learners, especially those with lower proficiency struggling to provide accurate feedback, potentially resulting in biased evaluations (deBoer et al., 2023). Additionally, the assessment could also be influenced by friendship bias, possibly distorting the results (Alqarni & Alshakhi, 2021). Also, PA could demand significant time and effort, potentially adding to the student workload and distracting from other learning activities (Wang et al., 2016). Finally, the fear of negative evaluation could reduce PA’s effectiveness, as students might feel uneasy giving and receiving critical feedback, impairing their learning experience (Panadero & Alqassab, 2019).

The novelty of this study lies in its direct comparison of the effects of SA and PA when both are facilitated by the same analytical rubrics. While previous research has separately delved into the benefits and drawbacks of SA and PA (Alqarni & Alshakhi, 2021; Boud & Falchikov, 2006; deBoer et al., 2023; Nicol & Macfarlane-Dick, 2006; Panadero & Alqassab, 2019; Wang et al., 2016), this study presents a nuanced understanding by observing the two in tandem within a particular EFL context like Vietnam. In recent decades, Vietnam has seen an intensified drive to master the English language, propelled by the forces of globalization and the country’s deepening engagement in international dynamics (Thao & Mai, 2020). However, this aspiration unfolds against a backdrop distinctly marked by Vietnam’s traditional educational paradigms and cultural nuances. Historically, Vietnamese educational approaches have gravitated towards teacher-centered methods, where teachers stand as primary knowledge bearers, and students predominantly engage in passive and rote learning, emphasizing memorization over critical or independent thinking (Thanh, 2010). Against this backdrop, the exploration of alternative assessment methodologies, especially SA and PA, in the realm of EFL emerges as both a challenge and an innovation. The traditional model, deeply rooted in Confucian values, places immense respect on authority and hierarchical relationships. These values could potentially render PA a delicate tool, where students might grapple with concerns about undermining peers or causing a loss of face (Panadero & Alqassab, 2019). On the other hand, SA emerges as a transformative tool, nudging Vietnamese students towards taking more proactive ownership of their learning, fostering a shift away from traditionally passive modes (Tran & Phan Tran, 2021). This study, situated at this crossroads, offers a pioneering look into the implementation and impact of SA and PA within the Vietnamese EFL setting. It juxtaposes the potential of SA to cultivate ownership, responsibility, and understanding with the challenges embedded in PA, ranging from trust issues and biases to the intricate art of feedback. Through this lens, the current research provides invaluable insights into the nuanced reactions and adaptabilities of Vietnamese learners, ensuring a blend of global pedagogical methodologies and local specificity (Thao & Mai, 2020). In essence, the study’s focus on assessment methodologies for EFL learners in Vietnam underscores its novelty. It not only elucidates the inherent challenges and advantages these methods present within the Vietnamese context but also paves the way for future endeavors to tailor and optimize these assessment tools, resonating with the unique socio-cultural and educational fabric of Vietnam (Van Van, 2020).

Conclusion

This study aimed to assess the impact of using analytical rubrics for SA and PA on the writing skills of EFL learners in Vietnam. The research adopted an experimental design comparing two treatment conditions and involved a sample of 44 English-major students, divided into two groups. Each group was instructed on how to use analytical rubrics for either SA or PA over a 17-week instructional period. The findings revealed no statistically significant differences between the SA and PA groups in the pre-test. However, in the post-test, a significant divergence was noted, with the SA group demonstrating enhanced writing competence. Moreover, the study found that both SA and PA had a beneficial effect on students’ writing performance, although SA offered more substantial improvements when implemented effectively.

Implications

The present study contributes to the growing body of research on formative assessment and the use of analytical rubrics to improve learning outcomes. By evaluating the effectiveness of analytical rubrics for SA and PA within the context of EFL teaching and learning in Vietnam, this study offers valuable evidence-based insights that can guide the development and implementation of formative assessment strategies in EFL settings.

The study demonstrates that the use of analytical rubrics in both SA and PA can lead to improvements in students’ essay writing skills. This finding underscores the potential of formative assessment, facilitated by assessment frameworks, to support the development of writing proficiency among EFL learners. Therefore, the study highlights the importance of incorporating formative assessment strategies into EFL curricula to enhance student learning and skill development. However, the data indicates that equipping students with the knowledge to utilize analytical rubrics for SA can lead to significant enhancements in their writing prowess. Given Vietnam’s cultural context, where respect for hierarchy and authority is deeply ingrained, SA emerges as a particularly potent tool. The process of self-evaluation aligns well with the Vietnamese learners’ inclination for introspection and self-improvement, allowing them to assert more control and responsibility over their learning journey without the potential cultural discomforts that might arise from PA, such as fearing undermining peers or facing biases.

Incorporating SA into pedagogical strategies can thus be seen as aligning more harmoniously with the local cultural ethos. Teachers can seamlessly weave this approach into their instructional methodologies, offering students lucid evaluation benchmarks and methodical feedback mechanisms. Such strategies not only amplify the learning experience but also resonate with the inherent cultural fabric, enabling learners to navigate their educational paths with greater autonomy and self-awareness. This prioritization of SA over PA, while being globally informed, remains deeply sensitive to Vietnam’s unique socio-cultural nuances, ensuring that the process of language acquisition is both effective and culturally congruent.

Next, for curriculum developers, the insights from the study indicate the need to incorporate formative assessment strategies, such as SA and PA using analytical assessment tools, into EFL curricula. This integration can ensure that learners receive regular feedback on their writing performance, helping them to identify areas for improvement and develop the necessary skills for effective communication in English. Besides, policymakers can use the insights from the study to guide the development of EFL policies and initiatives that prioritize formative assessment strategies. By promoting the use of SA and PA with analytical rubrics in EFL classrooms, policymakers can advance EFL education and support student success in mastering English as a foreign language.

Limitations and recommendations for further studies

Despite the valuable knowledge provided by this investigation, it is important to recognize its limitations. The relatively small sample size (44 participants) and the fact that all participants were English-major students from a single academic institution in Southwest Vietnam may limit the generalizability of the findings to other populations and EFL contexts. In addition, the study was conducted over a span of 17 weeks, which may not be sufficient to fully understand the long-term effects of using analytical rubrics for SA and PA on students’ essay writing skills. Furthermore, the study focused solely on the impact of analytical rubrics on essay writing proficiency, excluding other language skills such as reading, speaking, and listening, which are critical aspects of EFL education. Another limitation is the customization of the analytical rubrics for the Vietnamese context, which, while enhancing their suitability for this particular study, might make the findings less generic for a broader audience. One further limitation worth noting is the potential influence of the distinct essay prompts on the study’s outcomes. The differentiation between the two prompts may have inadvertently affected the final results, potentially skewing comparisons and conclusions drawn. Such distinctions could contribute to varying levels of familiarity, comfort, or engagement among participants, thereby influencing their performances. It must be acknowledged that while measures were taken to ensure consistency, this factor might still have played a role in the observed outcomes.

In order to address these limitations and further our understanding of formative assessment and the use of evaluation frameworks in EFL education, the following recommendations for future research are proposed. First, future studies should use larger and more diverse samples, including participants from different educational levels, geographical locations, and linguistic backgrounds, to enhance the generalizability of the findings. Second, researchers should consider conducting longitudinal studies to examine the long-term effects of using analytical evaluation tools for SA and PA on students’ writing skills and other language abilities. Third, additional research should investigate the impact of analytical evaluation tools on other language skills, such as reading, speaking, and listening, to provide a more comprehensive understanding of the benefits of formative assessment strategies in EFL education. Another vital recommendation for future research would be to assess and compare the efficacy of generic analytical rubrics with those that are customized for specific contexts, like in this study. Such an endeavor would help in discerning whether contextual adaptations significantly enhance or potentially limit the broader applicability of the rubrics in varied EFL environments. Additionally, in future studies, to mitigate the influence of the distinct essay prompts on the study’s outcomes, it would be advisable to utilize a standardized set of essay prompts or to rotate prompts among participants. This would ensure that any performance differences can be attributed more directly to the intervention (in this case, the use of analytical rubrics for SA and PA) rather than the inherent challenges or comforts posed by different prompts.

Furthermore, future studies should strive to control or investigate the influence of potential confounding variables, such as students’ prior knowledge, motivation, or exposure to different teaching approaches, on the effectiveness of using analytical evaluation tools for SA and PA. Importantly, to supplement the quantitative findings, future research could employ qualitative methods, such as interviews, focus groups, or classroom observations, to gain a deeper understanding of students’ experiences with SA and PA using analytical evaluation tools and the factors that may facilitate or hinder their effective implementation in EFL classrooms.

Availability of data and materials

The data and material generated and used in this study are available and retained by the authors. The authors are committed to facilitating access to these resources to promote transparency and scientific progress. Researchers interested in accessing the data and material can contact the corresponding author to initiate the process.

Abbreviations

EFL: English as a Foreign Language

PA: Peer assessment

SA: Self-assessment

SD: Standard Deviation

References

  • Ai, P. T. N., Nhu, N. V. Q., & Thuy, N. H. H. (2019). Vietnamese EFL teachers’ classroom assessment practice at the implementation of the pilot primary curriculum. International Journal of Language and Linguistics, 7(4), 172–177. https://doi.org/10.11648/j.ijll.20190704.15.

  • Alqarni, T., & Alshakhi, A. (2021). The impact of negotiation as a social practice on EFL writing peer assessment sessions. Theory and Practice in Language Studies, 11(10), 1334–1341. https://doi.org/10.17507/tpls.1110.23.

  • Andrade, H. L., & Brown, G. T. (2016). Student self-assessment in the classroom. In G. Brown & L. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 319–334). Routledge. https://doi.org/10.4324/9781315749136.

  • Beyreli, L., & Ari, G. (2009). The Use of Analytic Rubric in the Assessment of Writing Performance–Inter-Rater Concordance Study. Educational Sciences: Theory and Practice, 9(1), 105–125.

  • Blanche, P., & Merino, B. J. (1989). Self-assessment of foreign-language skills: Implications for teachers and researchers. Language Learning, 39(3), 313–338. https://doi.org/10.1111/j.1467-1770.1989.tb00595.x.

  • Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413. https://doi.org/10.1080/02602930600679050.

  • Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38(8), 941–956. https://doi.org/10.1080/02602938.2013.769198.

  • Bryman, A. (2016). Social research methods. Oxford: Oxford University Press.

  • Çakmak, F., Ismail, S. M., & Karami, S. (2023). Advancing learning-oriented assessment (LOA): Mapping the role of self-assessment, academic resilience, academic motivation in students’ test-taking skills, and test anxiety management in Telegram-assisted-language learning. Language Testing in Asia, 13(1), 1–19. https://doi.org/10.1186/s40468-023-00230-8.

  • Can, D. (2019). ESP Teacher’s perceptions and practices of formative assessment: An institutional case study in Vietnam. American Journal of Humanities and Social Sciences Research (AJHSSR), 3(5), 143–148.

  • Carroll, D. (2020). Observations of student accuracy in criteria-based self-assessment. Assessment & Evaluation in Higher Education, 45(8), 1088–1105. https://doi.org/10.1080/02602938.2020.1727411.

  • Çetin, Y. (2011). Reliability of raters for writing assessment: analytic-holistic, analytic-analytic, holistic–holistic. Mustafa Kemal Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 8(16), 471–486. https://dergipark.org.tr/en/pub/mkusbed/issue/19554/208359.

  • Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20(4), 328–338. https://doi.org/10.1016/j.learninstruc.2009.08.006.

  • Cohen, L., Manion, L., & Morrison, K. (2013). Research methods in education. London: Routledge.

  • Cook, T. D., Campbell, D. T., & Shadish, W. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

  • Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks: Sage Publications.

  • Cuesta-Melo, C. H., Lucero-Zambrano, M. A., & Herrera-Mosquera, L. (2022). The Influence of Self-Assessment on the English Language Learning Process of students from a public university in Colombia. Colombian Applied Linguistics Journal, 24(1), 89–104. https://doi.org/10.14483/22487085.17673.

  • Dao, H., & Newton, J. (2021). TBLT Perspectives on Teaching from an EFL Textbook at a Vietnam University. Canadian Journal of Applied Linguistics, 24(2), 99–126. https://doi.org/10.37213/cjal.2021.31371.

  • deBoer, M., Leontjev, D., & Friederich, L. (2023). From language to function: Developing self-and peer-assessment tools. ELT Journal, 77(1), 94–104. https://doi.org/10.1093/elt/ccac014.

  • Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education. New York: McGraw-hill.

  • Graham, S. (2012). Introduction to special issue on writing assessment and instruction. Exceptionality, 20(4), 197–198. https://doi.org/10.1080/09362835.2012.724622.

  • Hamp-Lyons, L., & Henning, G. (1991). Communicative writing profiles: An investigation of the transferability of a multiple-trait scoring instrument across ESL writing assessment contexts. Language Learning, 41(3), 337–373. https://doi.org/10.1111/j.1467-1770.1991.tb00610.x.

  • Harris, L. R., & Brown, G. T. (2018). Using self-assessment to improve student learning. New York: Routledge.

  • Harrison, C. J., Könings, K. D., Schuwirth, L., Wass, V., & Van der Vleuten, C. (2015). Barriers to the uptake and use of feedback in the context of summative assessment. Advances in Health Sciences Education, 20, 229–245. https://doi.org/10.1007/s10459-014-9524-6.

  • Jacobs, H. L., Zingraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: a practical approach. Rowley: Newbury House.

  • Jones, L., Allen, B., Dunn, P., & Brooker, L. (2017). Demystifying the rubric: A five-step pedagogy to improve student understanding and utilisation of marking criteria. Higher Education Research & Development, 36(1), 129–142. https://doi.org/10.1080/07294360.2016.1177000.

  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002.

  • Le, T. T., & Chen, S. (2018). Globalisation and Vietnamese foreign language education. In English tertiary education in Vietnam (pp. 16–27). New York: Routledge.

  • Le, X. M., Phuong, H. Y., Phan, Q. T., & Le, T. T. (2023). Impact of Using Analytic Rubrics for Peer Assessment on EFL Students’ Writing Performance: An Experimental Study. Multicultural Education, 9(3), 41–53. https://doi.org/10.5281/zenodo.7750831.

  • Li, M., & Zhang, X. (2021). A meta-analysis of self-assessment and language performance in language testing and assessment. Language Testing, 38(2), 189–218. https://doi.org/10.1177/0265532220932481.

  • Li, L., Liu, X., & Zhou, Y. (2012). Give and take: A re-analysis of assessor and assessee’s roles in technology-facilitated peer assessment. British Journal of Educational Technology, 43(3), 376–384. https://doi.org/10.1111/j.1467-8535.2011.01180.x.

  • Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education, 11(3), 279–290. https://doi.org/10.1080/13562510600680582.

  • Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing, 18(1), 30–43. https://doi.org/10.1016/j.jslw.2008.06.002.

  • McCarthy, P., Meier, S., & Rinderer, R. (1985). Self-efficacy and writing: A different view of self-evaluation. College Composition and Communication, 36(4), 465–471. https://doi.org/10.2307/357865.

  • McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40–49. https://www.jstor.org/stable/42923742.

  • Mustafa, F., & Yusuf, Y. Q. (2022). Workshop activity module in e-learning for maximum vocabulary exposure in an EFL classroom. Computer-Assisted Language Learning Electronic Journal (CALL-EJ), 23(2), 6–17.

  • Nguyen, T. H. H., & Truong, A. T. (2021). EFL teachers’ perceptions of classroom writing assessment at high schools in Central Vietnam. Theory and Practice in Language Studies, 11(10), 1187–1196. https://doi.org/10.17507/tpls.1110.06.

  • Nguyen, T. V. L. (2011). Project-based learning in teaching English as a foreign language. VNU Journal of Foreign Studies, 27(2), 140–146. https://js.vnu.edu.vn/FS/article/view/1476.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090.

  • Ölmezer-Öztürk, E., & Aydin, B. (2019). Investigating language assessment knowledge of EFL teachers. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 34(3), 602–620. https://doi.org/10.16986/huje.2018043465.

  • Pallant, J. (2020). SPSS survival manual: A step by step guide to data analysis using IBM SPSS. London: McGraw-Hill Education (UK).

  • Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(8), 1253–1278. https://doi.org/10.1080/02602938.2019.1600186.

  • Panadero, E., Tapia, J. A., & Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learning and Individual Differences, 22(6), 806–813. https://doi.org/10.1016/j.lindif.2012.04.007.

  • Panadero, E., Jonsson, A., & Strijbos, J. W. (2016). Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In D. Laveault & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 311–326). Springer International Publishing. https://doi.org/10.1007/978-3-319-39211-0_18.

  • Pham, T. (2016). Student-centredness: Exploring the culturally appropriate pedagogical space in Vietnamese higher education classrooms using activity theory. Australian Journal of Teacher Education (Online), 41(1), 1–21. https://bit.ly/3KOhMDM.

  • Phuket, P. R. N., & Othman, N. B. (2015). Understanding EFL students’ errors in writing. Journal of Education and Practice, 6(32), 99–106.

  • Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859.

  • Ritonga, M., Tazik, K., Omar, A., & Saberi Dehkordi, E. (2022). Assessment and language improvement: The effect of peer assessment (PA) on reading comprehension, reading motivation, and vocabulary learning among EFL learners. Language Testing in Asia, 12(1), 36. https://doi.org/10.1186/s40468-022-00188-z.

  • Ross, S. (1998). Self-assessment in second language testing: A meta-analysis and analysis of experiential factors. Language Testing, 15(1), 1–20. https://doi.org/10.1177/026553229801500101.

  • Ross, J. A. (2006). The reliability, validity, and utility of self-assessment. Practical Assessment, Research, and Evaluation, 11(1), 10. https://doi.org/10.7275/9wph-vv65.

  • Spiller, D. (2012). Assessment matters: Self-assessment and peer assessment. The University of Waikato, 13, 2–18.

  • Taylor, C., Brown, K., Lamb, B., Harris, J., Sevdalis, N., & Green, J. S. A. (2012). Developing and testing TEAM (Team Evaluation and Assessment Measure), a self-assessment tool to improve cancer multidisciplinary teamwork. Annals of Surgical Oncology, 19(13), 4019–4027. https://doi.org/10.1245/s10434-012-2493-1.

  • Thanh, P. T. H. (2010). Implementing a student-centered learning approach at Vietnamese higher education institutions: Barriers under. Journal of Futures Studies, 15(1), 21–38.

  • Thao, L. T., & Mai, L. X. (2020). English language teaching reforms in Vietnam: EFL teachers’ perceptions of their responses and the influential factors. Innovation in Language Learning and Teaching, 16(1), 29–40. https://doi.org/10.1080/17501229.2020.1846041.

  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249.

  • Topping, K. J. (2009). Peer assessment. Theory into Practice, 48(1), 20–27. https://doi.org/10.1080/00405840802577569.

  • Tran, T. Q., & Phan Tran, T. N. (2021). Vietnamese EFL high school students’ use of self-regulated language learning strategies for project-based learning. International Journal of Instruction, 14(1), 459–474. https://doi.org/10.29333/iji.2021.14127a.

  • Van Van, H. (2020). The roles and status of English in present-day Vietnam: A socio-cultural analysis. VNU Journal of Foreign Studies, 36(1), 1–21. https://doi.org/10.25073/2525-2445/vnufs.4495.

  • Wang, Y., Liang, Y., Liu, L., & Liu, Y. (2016). A multi-peer assessment platform for programming language learning: Considering group non-consensus and personal radicalness. Interactive Learning Environments, 24(8), 2011–2031. https://doi.org/10.1080/10494820.2015.1073748.

  • Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press.

  • Wride, M. (2017). Guide to peer-assessment. Academic Practice.

  • Yamanishi, H., Ono, M., & Hijikata, Y. (2019). Developing a scoring rubric for L2 summary writing: A hybrid approach combining analytic and holistic assessment. Language Testing in Asia, 9(1), 1–22.

  • Yan, Z., Chiu, M. M., & Ko, P. Y. (2020). Effects of self-assessment diaries on academic achievement, self-regulation, and motivation. Assessment in Education: Principles, Policy & Practice, 27(5), 562–583. https://doi.org/10.1080/0969594X.2020.1827221.

  • Zhao, H. (2018). Exploring tertiary English as a Foreign Language writing tutors’ perceptions of the appropriateness of peer assessment for writing. Assessment & Evaluation in Higher Education, 43(7), 1133–1145. https://doi.org/10.1080/02602938.2018.1434610.

  • Zimmerman, B. J., & Kitsantas, A. (2002). Acquiring writing revision and self-regulatory skill through observation and emulation. Journal of Educational Psychology, 94(4), 660–668. https://doi.org/10.1037/0022-0663.94.4.660.

Acknowledgements

The authors would like to express their sincere gratitude to the 44 students who participated in this study. Their willingness to take part and their invaluable contributions were instrumental in the successful completion of this project.

Funding

This research was conducted without external funding. The authors received no financial support or grants from any organization, institution, or agency, and independently covered all associated costs, including data collection, materials, equipment, and analysis.

Author information

Contributions

All authors contributed equally to this research and shared responsibility for all aspects of the project.

Corresponding author

Correspondence to Thanh Thao Le.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Phuong, H.Y., Phan, Q.T. & Le, T.T. The effects of using analytical rubrics in peer and self-assessment on EFL students’ writing proficiency: a Vietnamese contextual study. Lang Test Asia 13, 42 (2023). https://doi.org/10.1186/s40468-023-00256-y

Keywords