
Examining classroom writing assessment literacy: a focus on in-service EFL teachers in Iran

Abstract

Writing assessment literacy (WAL) has received increasing research attention over the past few years. This study investigated the writing assessment knowledge of Iranian English language teachers, along with their conceptions and practices of writing assessment, based on Crusan et al.'s (Assessing Writing 28:43–56, 2016) study, in order to better understand their current situation and to predict and accommodate future writing assessment requirements. The study further examined how teachers' knowledge, conceptions, and practices of writing assessment are influenced by contextual and experiential factors. To this end, a test of writing assessment knowledge, together with an adapted version of the questionnaire developed by Crusan et al. (Assessing Writing 28:43–56, 2016) on writing assessment conceptions and practices, was distributed among 120 Iranian in-service teachers selected through convenience sampling. The results showed that the participating teachers had inadequate levels of writing assessment knowledge. With regard to conceptions of writing assessment, the majority of the participants valued innovative assessment methods such as portfolios and self/peer assessment, although in practice they rarely used these methods. The study revealed no impact of teaching experience or context on teachers' writing assessment knowledge and practice. These findings contribute to our current understanding of WAL development, provide a more accurate picture of the writing assessment training needs of Iranian teachers, and inform the development of more efficient teacher education courses.

Introduction

Good assessment practice is an integral component of effective writing instruction (Beck et al., 2018). It is claimed that teachers' abilities, as well as their motivation to teach writing, are inextricably linked to their assessment abilities (Dappen et al., 2008). Hence, assessing students' writing constitutes a significant and challenging part of second/foreign language (L2) writing teachers' workloads (Crusan, 2010). To help L2 learners develop their writing skills, teachers need to recognize the challenges and problems students encounter in writing and use that information to develop or modify their instruction (Crusan et al., 2016). Since Black and Wiliam's (1998) argument regarding the contribution of classroom-based assessment to students' learning, assessment has been increasingly used for its learning purpose over its summative and grading functions. There has been an international movement to reform assessment practices in order to meet challenging societal, economic, and technological requirements in the twenty-first century (Genesee & Hamayan, 1994). According to Lee (2017), assessment of learning (AoL), which predominantly focuses on writing performance and scoring, has dominated classroom writing assessment in L2 contexts. Having its roots in the behaviorist theory of learning, AoL holds that students' learning can be quantified by means of objective data (Shepard, 2000). AoL falls under the category of summative assessment; it lays emphasis on the ultimate written product and pays little attention to the writing and learning process (Hamp-Lyons, 2007). The main purpose of AoL is to provide students with scores that generally have high stakes, such as moving to the next grade or receiving a certificate. This type of assessment is the dominant orientation in many English as a foreign language (EFL) contexts, including Iran, where the educational system is centralized and exam-oriented, with teachers as authorities who transmit information and students as passive recipients of that information (Carless, 2011; Lee, 2017; Naghdipour, 2016). This traditional conception of assessment has been challenged with the advent of constructivist and sociocultural theories, which opened up new insights into the nature of learning and aimed to produce graduates skilled in critical thinking, problem-solving, and efficient communication. According to these theories, learning is constructed by learners through social and cultural interactions, and assessment is an integral part of teaching that aims not only to measure students' learning but also to monitor and improve it (Lantolf & Poehner, 2004).

Classroom assessments are supposed to support accurate inferences about students' achievement, communicate that information to students and other stakeholders, and inform further instruction (Price et al., 2014). Classroom writing assessment is more than assessing the quality of a text at the end of a classroom assignment or a writing course; rather, it is a continuing process that is essential to teaching and learning and should not be used for grading purposes alone (Crusan & Matsuda, 2018). While grading cannot be completely discarded, as most institutes need final grades at the end of a course, teachers should think of classroom writing assessment as an instrument for developing students' autonomy, for recognizing areas of weakness and strength, and for planning future instruction (Plakans & Gebril, 2015). In a writing classroom, assessment for learning (AfL) means that learners play a significant role in their own learning, are aware of the learning goals, and are provided with adequate feedback and opportunities to practice (Lee & Coniam, 2013). Feedback, especially, has an important role in AfL: through it, teachers get to know students' areas of strength and weakness and take the measures required to close the gap between their present and desired performance (i.e., the zone of proximal development; Vygotsky, 1978). While both AoL and AfL are important in writing classes, AoL has traditionally dominated, and AfL is only beginning to draw the attention of scholars, who argue that students would perform better if formative and diagnostic uses of assessment were given priority in the writing classroom (Lee, 2017).

Out of the need to provide extensive evidence of students' learning and to employ a variety of assessment techniques in the classroom, including teacher–student conferences and peer/self assessment, the concept of language assessment literacy (LAL) took on increasing significance. Following the introduction of LAL, scholars attempted to clarify the concept and explain what assessment literacy meant for language teachers. While earlier studies of LAL took a componential view and focused on specifying the knowledge base teachers need in order to be considered assessment literate (e.g., Brindley, 2001; Davies, 2008; Inbar-Lourie, 2008), more recent studies have shifted to developmental and multi-dimensional views of LAL (e.g., Fulcher, 2012; Taylor, 2013; Xu & Brown, 2016). In this regard, Fulcher (2012) put forth a definition of LAL as:

The knowledge, skills, and abilities required to design, develop, maintain, or evaluate, large-scale standardized and/or classroom based tests, familiarity with test processes, and awareness of principles and concepts that guide and underpin practice, including ethics and codes of practice, placed within a larger historical, social, political, and ethical context (Fulcher, 2012, p. 125).

His definition of LAL indicated a broad range of components of the LAL construct, but it did not specify the degree and depth of those components with regard to the nature of stakeholders' participation in the assessment process. Later, Taylor (2013) developed a framework of LAL consisting of eight components: (1) knowledge of theory, (2) technical skills, (3) principles and concepts, (4) language pedagogy, (5) sociocultural values, (6) local practices, (7) personal beliefs, and (8) scores and decision-making (p. 410). Taylor (2013) argued that "different stakeholder groups need differential levels of assessment literacy according to their specific roles and responsibilities" (p. 408). Taylor's (2013) LAL profile is significant because it took stakeholders' attitudes into account to explain the complexity of teachers' assessment decision-making, and it clarified the type and depth of each component for four different assessment stakeholders (test writers, classroom teachers, university administrators, and professional language testers) by mapping the framework onto Pill and Harding's (2013) five-stage continuum. Accordingly, classroom teachers need to learn more about the pedagogical aspects of assessment and less about assessment theory or measurement concepts.

Adopting a sociocultural approach to LAL and translating it into the L2 writing assessment context, Crusan et al. (2016) introduced the concept of writing assessment literacy (WAL) to capture the specific assessment literacy L2 writing teachers need for carrying out routine classroom writing assessment procedures and for making sound educational judgments based on students' performance. Drawing on Borg's (2003) theory of teacher cognition, Crusan et al. (2016) defined WAL as teachers' knowledge, beliefs, and practices of writing assessment, affected by contextual and experiential factors. They argued that, in addition to the content, considerations of L2 teachers' WAL should emphasize "how this content is enmeshed with teachers' knowledge, beliefs, and practices" (Crusan et al., 2016, p. 45). According to sociocultural theories, knowledge cannot be merely transmitted to learners by others; rather, it is socially constructed through practice (Vygotsky, 1978). Within this conceptualization, although mastery of the assessment knowledge base is considered necessary for teachers' LAL development, it is not sufficient, because teachers have developed their own conception system with regard to assessment through the experience they had as learners observing their teachers' work during school or undergraduate years (Phipps & Borg, 2009). Novice teachers are thus likely to draw upon their own apprenticeship of observation and utilize assessment procedures they are acquainted with (Lortie, 1975). Although learning the assessment knowledge base is significant, it does not necessarily make language teachers assessment literate. The contents of the knowledge base taught to teachers are influenced and interpreted by teachers' conceptions of assessment, that is, the beliefs they hold towards assessment, which function as a filter that interprets new information and determines the type of information that is accepted (Brown, 2004, 2008). Once teachers develop their own beliefs about teaching, learning, assessment, students, their positions as teachers, and the instructional materials, this belief system influences their uptake of assessment knowledge and adoption of assessment procedures (Brookhart, 2011; Fulmer et al., 2015; Opre, 2015). The other main component of LAL is assessment practice, which refers to the types of assessment procedures teachers actually use in the classroom and is related to teachers' knowledge base and their assessment conceptions. In addition to teachers' knowledge base and conceptions, contextual and experiential factors have a major influence on teachers' assessment practice, as assessment policies at the national or institutional level confine their practices or require them to engage in assessment practices that conflict with their own conceptions (Xu & Brown, 2016; Xu & Liu, 2009).

Review of literature

Since Weigle's (2007) call for attention to L2 writing teachers' assessment education, writing scholars have conducted studies to bridge the gap between writing teachers' assessment preparation and teacher education programs. She regretted that while good assessment practice is critical to the teaching of L2 writing, it is not usually taught to pre-service English language teachers, and most TESOL programs do not even include an assessment course; as a result, many L2 writing teachers start their careers without in-depth knowledge of writing assessment issues. In this regard, Dempsey et al. (2009) investigated pre-service teachers' knowledge of writing assessment and underscored the need to train pre-service teachers to make informed and detailed judgments about student writing. In their study, Crusan et al. (2016) found that while more than half of the participants had received training in writing assessment and regarded themselves as competent in writing assessment, they did not have enough confidence in their assessment practices, and some of them were not capable of developing or using assessment rubrics and criteria. Lam (2019) investigated Hong Kong secondary school teachers' knowledge, conceptions, and practices regarding writing assessment through a questionnaire, interviews, and observations. The results revealed that the participants had basic writing assessment knowledge and positive conceptions of alternative writing assessment.

In Iran, studies on LAL have increasingly attracted researchers' attention. Prior studies demonstrate that there are concerns about teachers' level of LAL (e.g., Abbasian & Koosha, 2017; Afsahri & Heidari Tabrizi, 2017; Afshar & Ranjbar, 2021; Ahmadi & Mirshojaee, 2016; Rezaei Fard & Tabatabaei, 2018). Teachers' LAL has been shown to have significant associations with students' writing achievement (Ahmad, 2020; Mellati & Khademi, 2018). The inclusion of writing sections in most large-scale proficiency tests, such as IELTS and TOEFL, has dramatically raised the assessment responsibilities of L2 writing teachers in Iran. More and more students are attending exam preparation classes, and teachers are increasingly required to develop their pedagogical and assessment capability to prepare students for the exam and to interpret the results. Despite this urgency, previous studies have revealed that writing teachers follow traditional, teacher-centered, and product-based approaches to writing assessment (Birjandi & Hadidi Tamjid, 2012; Marefat & Heydari, 2018; Naghdipour, 2016; Rahimi, 2009). Naghdipour (2016) stated that writing teachers follow traditional approaches to L2 writing in classrooms and that "incorporating formative assessment tools, collaborative tasks, portfolio writing, and other process- and genre-based strategies were among activities absent from the majority of writing classes" (p. 85). Emphasizing that current L2 programs are not related to the real-life writing needs of Iranian students, Marefat and Heydari (2018) referred to inadequate teacher preparation as one of the main challenges. They maintained that "Teacher training should be developed to improve teachers' agency and enable them to act as agents of change in the field of writing instruction. Given proper education and agency, in EFL contexts like Iran teachers can offer innovative alternatives and suggest modifications based on local necessities" (Marefat & Heydari, 2018, p. 79). Nemati et al. (2017) investigated the writing proficiency, writing assessment ability, and written corrective feedback beliefs and practices of Iranian English teachers and found that teachers' writing proficiency did not meet expectations and standards and that their writing assessment was also inaccurate. Soltanpour and Valizadeh (2019) investigated the difference between the writing assessment knowledge, beliefs, and training needs of TEFL and non-TEFL teachers in Iran and found a significant relationship between the participants' major and their assessment beliefs as well as the type of assessment methods they employed. Ataie-Tabar et al. (2019) focused on the socio-cultural aspects of WAL and investigated the extent to which students and teachers considered writing assessment a student-centered activity that promotes learning. The results revealed that Iranian EFL teachers face challenges in successfully implementing a sociocultural approach to writing assessment in classrooms and need to develop their knowledge of student-centered writing assessment.

As indicated by the literature, the topic of Iranian L2 teachers' WAL is in its infancy. The abovementioned studies on L2 writing teachers in Iran have shed light on some important aspects of teachers' WAL and have meaningful implications for teacher education programs. They emphasized an urgent need for the provision of proper training in writing assessment and instruction in teacher education courses. More studies are required to estimate writing teachers' current status and training needs, and to understand where and how to begin a professional training course to improve teachers' WAL. Crusan (2022) refers to the research gap in WAL as the major impediment to the proper implementation of writing assessment. Given the call for more WAL research to draw context-specific conclusions (Crusan, 2022; Crusan et al., 2016), dissatisfaction with the current LAL of Iranian teachers, and Iranian writing teachers' lack of predisposition to incorporate innovative assessment techniques, this study investigates L2 writing teachers' knowledge, conceptions, and practices in Iran. Given the contribution of the AfL movement to students' learning, teachers' views and practice of AfL approaches in writing classes, and their alignment with AoL in exam-oriented education contexts such as Iran, would be of great significance (Lam, 2019). In EFL contexts, writing assessment is quite significant for developing and improving the quality of the teaching and learning of writing (Crusan et al., 2016; Lee, 2017), so investigating the extent to which teachers possess content knowledge of writing assessment, the extent to which they can apply this knowledge in practical domains, and whether their conception of assessment is a facilitating or inhibiting factor in employing innovative assessment techniques would contribute to the development of better training courses. In view of these justifications, the present study was conducted, drawing on Crusan et al.'s (2016) definition of L2 teachers' WAL and Taylor's (2013) LAL framework, to investigate Iranian L2 writing teachers' knowledge, conceptions, and practices of assessment in the classroom. In particular, the study sought to answer the following questions:

  1. What are Iranian EFL teachers' levels of writing assessment knowledge?

  2. To what extent is the writing assessment knowledge of Iranian EFL teachers compatible with the standards of WAL? What are the areas of match and mismatch between the WAL of Iranian EFL teachers and the standards of WAL?

  3. What are Iranian EFL teachers' conceptions of writing assessment?

  4. What are Iranian EFL teachers' common writing assessment practices?

  5. What is the impact of context and experience on writing assessment knowledge, conception, and practice?

Methods

Participants

The participants in this study were 120 in-service EFL teachers selected through convenience sampling. To ensure that we had reached our intended audience, i.e., L2 writing teachers, one item of the questionnaire asked whether respondents were teaching a writing course at the time of the study; those who answered no were excluded. Table 1 shows the demographic characteristics of the EFL teachers who completed the test and questionnaire. As represented in Table 1, 53.3% of the participants were working at a private English institute and 46.7% were teaching at a university. The majority were in the age range of 26–35. Regarding academic degree, 10.9% of respondents held a BA, 61.8% an MA, and 27.3% a PhD. In terms of work experience, most participants had been working between 6 and 10 years.

Table 1 Demographic attributes of the participants (%)

Procedure and instruments

The first component of WAL considered in this study is the assessment knowledge base. Teachers need a sound assessment knowledge base to implement high-quality assessments. Despite the existence of some instruments for examining language teachers' LAL (e.g., Farhady & Tavassoli, 2018; Tao, 2014), there is no single test that measures teachers' WAL in particular. The dominant methodology in WAL research has been the survey approach, drawing on self-reported knowledge (Crusan et al., 2016; Lam, 2019). Self-report instruments have been shown to provide less accurate information about teachers' assessment knowledge because they are based on participants' perceptual interpretation, and teachers may take an overly positive view of their own assessment knowledge (e.g., Chapman, 2008; Farhady & Tavassoli, 2018; Zhang, 1996). Consequently, within the general and language education literature, objective knowledge tests have been proposed as a better indication of teachers' knowledge, as they directly measure the level of the assessment knowledge base (e.g., Farhady & Tavassoli, 2018; Mertler, 2003; Ölmezer-Öztürk & Aydin, 2018; Plake, 1993; Tao, 2014). In response, this study investigated writing teachers' assessment knowledge by means of a knowledge test. Comprehensive research was conducted in multiple stages in Iran to develop a writing assessment knowledge test, after a preliminary stage of identifying EFL teachers' needs with regard to writing assessment (Tayyebi & Abbasabadi, 2020). After several stages of revision and validation, a localized multiple-choice test with 25 items was developed (forthcoming). Additionally, an adapted version of the questionnaire developed by Crusan et al. (2016) was used to provide insights into participants' writing assessment conceptions and practices. Given that the respondents were English language instructors, the test and the questionnaire were both written in English. The final instrument included 9 multiple-choice items on participants' demographic information, 25 multiple-choice questions on writing assessment knowledge, and 39 Likert-scale items on writing assessment conceptions and practices. The final version was piloted with 40 EFL teachers. After the required changes were made, the instrument was distributed, via email and WhatsApp (a popular social network in Iran), among 400 EFL teachers selected through convenience sampling. Of these, 120 questionnaires were accepted because they were completely filled out and the participants were identified as writing teachers. Cronbach's alpha for the instrument was 0.78.
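For readers who wish to reproduce this kind of reliability estimate on their own data, the sketch below shows the standard Cronbach's alpha computation. It is a minimal illustration, not the authors' SPSS procedure; the matrix shape and the random placeholder responses are assumptions.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: 120 respondents x 39 five-point Likert items (assumed shape)
rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(120, 39))
print(f"alpha = {cronbach_alpha(responses):.2f}")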

Data analysis

After the test was distributed among the participants, the data were analyzed with SPSS (version 24) using a variety of methods. Data from the test and questionnaire were analyzed using descriptive statistics such as frequencies, means, and standard deviations. A one-way analysis of variance (ANOVA) and an independent-samples t test were used to examine differences in teachers' writing assessment knowledge, and Kruskal-Wallis and Mann-Whitney tests were used to examine differences in teachers' conceptions and practices with respect to teaching experience and context.
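As a rough open-source analog of this SPSS workflow, the sketch below runs the same family of tests with SciPy. It is a hedged illustration: the DataFrame and column names (knowledge, conception, practice, context, experience) are assumptions, not the authors' variable names.

import pandas as pd
from scipy import stats

def run_inferential_tests(df: pd.DataFrame) -> None:
    """Parametric tests on knowledge scores; non-parametric tests on Likert composites."""
    uni = df[df["context"] == "university"]
    inst = df[df["context"] == "institute"]

    # Independent-samples t test: knowledge by teaching context
    _, p = stats.ttest_ind(uni["knowledge"], inst["knowledge"])
    print(f"t test (knowledge ~ context): p = {p:.3f}")

    # One-way ANOVA: knowledge across experience bands
    _, p = stats.f_oneway(*[g["knowledge"] for _, g in df.groupby("experience")])
    print(f"ANOVA (knowledge ~ experience): p = {p:.3f}")

    # Non-parametric tests for the Likert-based conception/practice scores
    for col in ("conception", "practice"):
        _, p = stats.mannwhitneyu(uni[col], inst[col])
        print(f"Mann-Whitney ({col} ~ context): p = {p:.3f}")
        _, p = stats.kruskal(*[g[col] for _, g in df.groupby("experience")])
        print(f"Kruskal-Wallis ({col} ~ experience): p = {p:.3f}")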

Results

EFL teachers’ WAL level

The scores of the participants (N = 120) on the WAL test ranged from 7 to 21, with a mean of 13.74 and a standard deviation of 4.76. The lowest score was 7, indicating that some teachers could correctly answer only 7 of the 25 items, and the maximum score was 21, indicating that none of the teachers could correctly answer all of the items. These results confirm the writing teachers' low to moderate performance on the test.

Compatibility of writing assessment knowledge of Iranian EFL teachers with the standards of WAL

Since Taylor's (2013) assessment literacy framework inspired the design of the writing assessment knowledge test, the results were analyzed and mapped using her framework. As mentioned earlier, Taylor (2013) developed a LAL profile for four major categories of language assessment stakeholders: test writers, university administrators, professional language testers, and classroom teachers. According to the profile, knowledge of "language pedagogy" is the most significant component for classroom teachers, while knowledge of "scores and decision-making" and of the "principles and concepts" of language assessment are the least important. The framework is useful because it maps the degree of required knowledge on different components of LAL onto Pill and Harding's (2013) continuum, on which LAL is developmental and evolves in stages from illiteracy through nominal, functional, and procedural to multidimensional literacy. On this continuum, functional literacy means a sound understanding of basic terms and concepts; procedural and conceptual literacy means understanding central concepts of the field and using that knowledge in practice; and multidimensional literacy means knowledge extending beyond ordinary concepts to the philosophical, historical, and social dimensions of assessment. Taylor (2013) combined Pill and Harding's developmental scale with the synthesized LAL profile to outline the degrees of knowledge necessary across LAL dimensions for different assessment stakeholders. According to this profile, the required levels for language teachers are as follows: for "knowledge of theory," "principles and concepts," and "scores and decision-making," teachers need functional literacy; for "technical skills," they need procedural and conceptual literacy; and for "language pedagogy," they need multidimensional literacy. To locate the gaps, the current levels of assessment knowledge of the writing teachers were compared with the levels specified by Taylor (2013). The elicited literacy levels in four thematic areas are shown in Table 2. Mean scores across the thematic areas ranged from 44.0% for scores and decision-making to 56.0% for pedagogical skills.
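To make this comparison concrete, the toy sketch below encodes Pill and Harding's stages as integers and computes the gap between Taylor's required profile for classroom teachers and an observed profile. The numeric encoding and the observed values are illustrative assumptions, not the study's measured data.

# Pill and Harding's (2013) continuum encoded as integers (assumed encoding):
# 0 = illiteracy, 1 = nominal, 2 = functional,
# 3 = procedural/conceptual, 4 = multidimensional.
REQUIRED = {  # Taylor's (2013) profile for classroom teachers
    "technical skills": 3,
    "principles and concepts": 2,
    "language pedagogy": 4,
    "scores and decision-making": 2,
}
OBSERVED = {  # hypothetical observed levels, for illustration only
    "technical skills": 1,
    "principles and concepts": 1,
    "language pedagogy": 2,
    "scores and decision-making": 1,
}

for area, required in REQUIRED.items():
    observed = OBSERVED[area]
    print(f"{area}: observed {observed}, required {required}, gap {required - observed}")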

Table 2 Literacy levels in four different thematic areas of writing assessment

As shown in Fig. 1, the thematic areas displayed in black are the knowledge levels of the participants, and the areas displayed in grey are the levels specified by Taylor's framework. The figure indicates that the EFL writing teachers are functionally literate in "pedagogical skills" and below the functional level in "technical skills," "principles and concepts," and "scores and decision-making." As such, the teachers are relatively underdeveloped in all four areas in comparison with the levels proposed by Taylor. In the "scores and decision-making" and "principles and concepts" areas, they are supposed to be functionally literate, but the results indicated they are not. For "technical skills," they are required to have procedural and conceptual literacy, while the data indicated they are below the functional level. In "language pedagogy," they are supposed to have multidimensional literacy, but the results indicated that the writing teachers are only functionally literate in that area.

Fig. 1 Comparison between Taylor's profile and the obtained profile

Conceptions of writing assessment

Teachers' assessment conceptions constitute an important part of WAL (Crusan et al., 2016; Xu & Brown, 2016). To find out about writing teachers' conceptions, items 9–22 in the questionnaire addressed the conceptions teachers hold about different aspects of writing assessment. The items covered two areas: conceptions about different writing assessment methods (questions 22, 23, 24, 28, 30, 32, 33, 37) and generic conceptions about writing assessment (questions 29, 34, 35, 36, 39, 40). When questioned about their feelings regarding writing assessment, 51% of the participants found writing assessment interesting and challenging, 18% accepted it as a necessary part of their job, and 31% said they would rather do something else (Table 3). In terms of assessment methods, portfolio assessment (90%) was perceived to be the most important method, followed by essay exams (77.5%) and self/peer assessment (77%). Around 53% believed that writing assessment improves the learning and teaching of writing. With regard to scoring, 58.5% believed that writing assessment is subjective, and 55% thought that the results of writing assessment are not trustworthy. 62.5% believed that all writing teachers need to pass a rater training course. When asked about using scoring rubrics in writing classes, only 41% believed in the significance of providing specific criteria and rubrics, and 40% believed that assessment criteria should be shared with students. 42.5% believed that in assessing writing, content should receive more weight than grammatical accuracy. When asked whether they felt confident while assessing writing, 75% agreed that they did. Furthermore, 90% of the participants believed assessment is an important capability that writing teachers should master. Teachers' conceptions of writing assessment are represented in Fig. 2.

Table 3 Teachers’ conception of writing assessment
Fig. 2 Teachers' conception of writing assessment

Common writing assessment practice

To answer question four, the last part of the questionnaire, which looked into participants' assessment practices in writing classes, was analyzed. As shown in Table 4, 50% of participants used take-home essay writing, followed by in-class essay writing (45%). Around 30% of the participants said they always use multiple-choice grammar quizzes.

Table 4 Teachers’ practice of writing assessment

The results reveal that participants less frequently used innovative assessment methods: self and peer assessment (40%), small group writing projects (23.5%), portfolio assessment (22%), student journals (27.5%), and student-teacher conferences (27%). Only 27% said that they use multiple drafting in their writing sessions. Around 42.5% of participants said they use a rubric, and 40% said they share the rubric with students. When asked if they use a specific rubric for every assignment, 43.5% indicated they always do, and around 39.5% of respondents train their students on how to apply assessment criteria to their own and their peers' work, ensuring that their students comprehend the information included in the rubrics. When asked whether they provide students with written qualitative feedback, 52% reported doing so, and of these, only 31% check to see whether students have read their comments and applied them to the next draft. 37.5% of the respondents mentioned that they integrate writing with other skills, and 50% said they use computer technology in writing assessment. Teachers' assessment practices are displayed in Fig. 3.

Fig. 3 Teachers' practice of writing assessment

The impact of contextual and experiential factors on WAL

All data were tested for normality using the Kolmogorov-Smirnov test, and the knowledge test data indicated a normal distribution. To investigate the possible impact of teaching context and experience on teachers' knowledge, a t test and an ANOVA were applied. The first comparison examined the difference in writing assessment knowledge between teachers based on the context they work in, i.e., university or institute. Table 5 shows the independent-samples t test results for the knowledge test scores. The analysis showed that the difference between the performance of the two groups of teachers on the WAL test was not significant (Sig. = 0.505, F = 0.447).

Table 5 Independent-samples t test of teachers in different teaching contexts

Afterward, a one-way ANOVA was performed to see whether there was a significant difference in teachers' knowledge test scores with regard to teaching experience; the result showed no significant difference. The results are displayed in Table 6.

Table 6 One-way ANOVA of teachers with different levels of experience

To examine the effect of teaching context (university vs. institute) on writing assessment conceptions and practices, a Mann-Whitney test was conducted (Table 7). The results revealed that teaching context significantly affected teachers' conceptions (Sig. = 0.007) but did not affect assessment practice (Sig. = 0.485).

Table 7 Mann-Whitney test for teaching context

A Kruskal-Wallis test was used to examine the impact of four levels of teaching experience on participants' writing assessment conceptions and practices. The analysis yielded significant differences in teachers' writing assessment conceptions (Sig. = 0.012) (Table 8). However, no significant differences were obtained for assessment practice (Sig. = 0.083) (Table 9).

Table 8 Kruskal-Wallis test for teaching experience and assessment conception
Table 9 Kruskal-Wallis test for teaching experience and assessment practice

Discussion

The purpose of this study was to explore in-service EFL teachers' writing assessment knowledge, conceptions, and practices, which underpin classroom writing assessment literacy (Crusan et al., 2016). The results revealed that the participants did not have an adequate writing assessment knowledge base. These findings support Farhady and Tavassoli's (2018) study, which indicated that the majority of Iranian English teachers had low levels of language assessment knowledge. This also agrees with Lam's (2019) study in Hong Kong, in which he reported that participants had a basic perceived level of assessment knowledge. This low level of assessment knowledge among Iranian writing teachers underlines the necessity for them to improve their knowledge of different aspects of writing assessment, and while it may be difficult, there appears to be a pressing need to provide teachers with training on various aspects of writing assessment. The necessity for teacher education courses to pay close attention to writing instruction and assessment resonates with a number of previous studies on writing assessment in Iran (Ataie-Tabar et al., 2019; Marefat & Heydari, 2018; Nemati et al., 2017; Soltanpour & Valizadeh, 2019). To learn more about the gaps in participants' knowledge base, writing teachers' current levels were compared to Taylor's (2013) profile. The results indicated that participants were at best functionally literate in the four thematic areas of "technical skills," "pedagogical skills," "principles and concepts," and "scores and decision-making." In Lan and Fan's (2019) study, the Chinese participants' classroom-based language assessment literacy was also nearly at the functional level, while they desired assessment literacy at the procedural and conceptual level in order to understand the principles and theory of classroom-based language assessment and to apply that knowledge in practical domains. This gap underscores the need for more professional training and provides an outline for future teacher education and professional development programs. Teachers need to be trained in these four components, particularly "technical skills" and "language pedagogy," which are considerably lower than Taylor's postulated levels. Teachers must have the pedagogical skill required to employ a variety of assessment methods in the classroom, including formative assessment methods, to identify learners' strengths and weaknesses, use assessments to inspire them to learn, evaluate their progress, and inform learning and teaching goals. They must have the relevant technical skills to design appropriate items or tasks for a specific assessment purpose, analyze the quality of individual items or tasks using statistics, and utilize appropriate rating scales (Kremmel & Harding, 2019). They must also have a better awareness of the conceptual and principled components of writing assessment, including reliability, validity, and ethical issues. As a result, among the four areas of concern for further professional training, "language pedagogy," followed by "technical skills," is the area of primary concern, with certain aspects in particular need of training, such as formative assessment methods and assessment rubrics.
These findings accord with Mellati and Khademi's (2018) study, which reported that teacher education programs should focus on the practical and pedagogical aspects of assessment to enable teachers to apply theoretical knowledge in selecting assessment methods, scoring and interpreting data, making proper decisions about students and instructional practice, and communicating results to different stakeholders. These findings imply that English teacher education programs in Iran require modification and adjustment as regards teaching writing assessment. Provision of stand-alone writing assessment courses that deal with both theoretical and practical aspects of writing assessment is among the mandatory steps. As previous studies expressed concern about the neglect of writing and writing assessment in ESL contexts (Hirvela & Belcher, 2007; Weigle, 2002), this study raises the same concern in Iran, where teacher training programs tend to give writing assessment little attention: it is often the topic of just one or a few sessions, and many programs may ignore it altogether.

With regard to conceptions, the majority of the participants had a positive conception of writing assessment, finding it interesting and challenging. The majority of them felt confident while assessing writing. A large majority of participants preferred alternative assessments, including portfolio assessment and self/peer assessment. These results need to be interpreted with caution, however, since they depend on a self-report questionnaire, and teachers might have tried to conform to contemporary assessment norms rather than reporting their actual conceptions. These findings confirm Lam's (2019) study, in which the majority of Hong Kong teachers had positive conceptions of classroom-based writing assessment and favored alternative types of assessment. This finding is interesting since writing has a long history of neglect in Iran in both the L1 and L2 curriculum, and even those programs that include writing are dominated by product approaches and traditional assessment methodologies. Given that teachers' previous experience with writing instruction and assessment shapes their views and value systems (Brown, 2004, 2008; Brown & Gao, 2015), the participants indicated a change in their assessment conceptions from traditional to more process-oriented and innovative assessment methods. This change in conception can be explained by teachers' repeated practice, self-learning, and experience over the years. This finding is in line with Vogt et al. (2020), who maintained that experience can overshadow the effects of teacher education.

With regard to teachers' common assessment practices, the results indicated that the majority of participants preferred take-home essay writing, followed by in-class essay writing. The results also confirmed that participants less frequently used innovative assessment methods, including self and peer assessment, small group writing projects, portfolio assessment, student journals, and student-teacher conferences. A very small number of participants used multiple drafting, meaning that the majority of participants took a product approach to writing. Looking at the results, it can be seen that teachers' practice has more of a summative than a formative function. Despite their positive conceptions of innovative assessment methods, teachers found it difficult to maintain these practices due to a lack of proper assessment knowledge or contextual boundaries such as national assessment policy. According to Brown and Gao (2015), teachers' assessment conceptions form a complex interpretive system that may or may not be realized in practice and that influences the way teachers conduct assessments in the classroom. The results of the study revealed a gap between teachers' conceptions of writing assessment and their practice in reality. While the majority of participants valued innovative assessment, few of them reported employing it in the classroom. James and Pedder (2006) similarly indicated that while most teachers thought formative assessment was important, there was a difference between what they thought and what they actually did. This difference between what teachers claim to value and what they do in reality is concerning because it implies that other variables, including lack of proper knowledge and national assessment policies and requirements, can override a widely held conception in practice.

Regarding the effects of contextual and experiential factors on teachers' WAL, it was found that writing assessment knowledge and practice were not influenced by those factors. The findings support Tao (2014), who found that experience had no effect on instructors' classroom assessment knowledge, but contradict Alkharusi (2011), Farhady and Tavassoli (2018), and Crusan et al. (2016), who found that teachers with more teaching experience had higher assessment knowledge than those with less. With regard to context, Farhady and Tavassoli reached a different result, finding that public school EFL teachers were more knowledgeable about language assessment than EFL teachers in private institutes. However, context and experience significantly affected teachers' writing assessment conceptions. The results on the impact of teaching context and experience on assessment knowledge, conceptions, and practices are thus somewhat contradictory. To better understand the nature of classroom assessment literacy in general and writing assessment literacy in particular, more research is needed on the relationship between these factors and teachers' classroom assessment literacy in different contexts, using different instruments.

The study is not without limitations. Convenience sampling and sample size are clear constraints. The participants were selected based on their availability, which can limit the generalizability of the findings to other contexts. Furthermore, the study's sample size was small, which might have influenced the outcomes. The other limitation relates to the data collection procedure, which included a test and a self-report questionnaire. Including other data-gathering methods, such as observation or requiring teachers to assess students' writing according to given criteria, would offer more in-depth information about how teachers plan and implement writing assessment in real classroom contexts.

Conclusions

The changing needs of Iranian students and academics concerning English writing, and the inadequacy of current writing pedagogy, call for policy makers' attention to reexamine L2 writing programs in Iran (Marefat & Heydari, 2018). One important problem for L2 writing teachers in Iran is the lack of professional training opportunities in both the teaching and the assessment of writing. In terms of writing assessment, teacher education courses are insufficient to provide teachers with conceptual and practical expertise, and it would be unwise to believe that general assessment courses can equip pre-service teachers with the assessment knowledge they require for L2 writing assessment (Larsen, 2016). An important step toward improving the current situation is undoubtedly training and equipping both pre-service and in-service teachers with appropriate knowledge of writing pedagogy and assessment. Provision of efficient teacher education and professional development programs is among the mandatory steps that must be implemented (Homayounzadeh & Razmjoo, 2021). It is assumed that high-quality and practical teacher training programs result in high-quality teaching and learning (Lieberman & Darling-Hammond, 2012). As such, teacher education programs need to provide teachers with theoretical and practical assessment knowledge and maximize opportunities for them to improve their assessment literacy (Beziat & Coleman, 2015; Fulcher, 2012; Giraldo, 2021; Giraldo & Murcia, 2018). Previous research has shown how proper training and practical engagement in writing assessment can provide writing teachers with the knowledge and abilities required to undertake classroom-based assessment (Ahmad, 2020; Ho & Yan, 2021; Lee, 2010; Min, 2013). Rather than embedding writing assessment within general language assessment courses, one way to help Iranian teachers enhance their classroom writing assessment literacy is to design stand-alone writing assessment courses (Crusan et al., 2016). The first step in educating L2 writing teachers in Iran is therefore to provide them with opportunities to replace their old apprenticeship of observation with a new model of writing and writing assessment, and to learn the conceptual and theoretical aspects of writing assessment as well as the practical application of those abstract concepts by actually taking part in assessment activities.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Abbreviations

WAL: Writing assessment literacy

ELT: English language teaching

LAL: Language assessment literacy

L2: Second or foreign language

EFL: English as a foreign language

AoL: Assessment of learning

AfL: Assessment for learning

References

  • Abbasian, G., & Koosha, M. (2017). An investigation of Iranian university teachers' assessment literacy: EFL teachers vs. field specialist ESP teachers. Foreign Language Research Journal (Pazhuhesh-e Zabanhaye Khareji), 7(1), 203–232.

  • Afsahri, E., & Heidari Tabrizi, H. (2017). Iranian EFL teachers' assessment literacy and inclination towards the use of alternative assessment. Journal of Applied Linguistics and Language Research, 4(4), 283–290.

  • Afshar, H. S., & Ranjbar, N. (2021). EAP teachers' assessment literacy: From theory to practice. Studies in Educational Evaluation, 70, 101042.

  • Ahmad, Z. (2020). Teachers' assessment of academic writing: Implications for language assessment literacy. In S. Hidri (Ed.), Perspectives on language assessment literacy: Challenges for improved student learning (1st ed., pp. 159–175). Routledge.

  • Ahmadi, A., & Mirshojaee, S. B. (2016). Iranian English language teachers' assessment literacy: The case of public school and language institute teachers. The Iranian EFL Journal, 12(2), 6–32.

  • Alkharusi, H. (2011). Teachers' classroom assessment skills: Influence of gender, subject area, grade level, teaching experience and in-service assessment training. Journal of Turkish Science Education, 8(2), 39–48.

  • Ataie-Tabar, M., Zareian, G., Amirian, S. M. R., & Adel, S. M. R. (2019). A study of socio-cultural conception of writing assessment literacy: Iranian EFL teachers' and students' perspectives. English Teaching & Learning, 43(4), 389–409.

  • Beck, S. W., Llosa, L., Black, K., & Anderson, A. T. (2018). From assessing to teaching writing: What teachers prioritize. Assessing Writing, 37, 68–77.

  • Beziat, T. L., & Coleman, B. K. (2015). Classroom assessment literacy: Evaluating pre-service teachers. The Researcher, 27(1), 25–30.

  • Birjandi, P., & Hadidi Tamjid, N. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners' writing performance. Assessment & Evaluation in Higher Education, 37(5), 513–533.

  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.

  • Borg, S. (2003). Teacher cognition in language teaching: A review of research on what language teachers think, know, believe and do. Language Teaching, 36(2), 81–109.

  • Brindley, G. (2001). Language assessment and professional development. Experimenting with Uncertainty: Essays in Honour of Alan Davies, 11, 137–143.

  • Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3–12.

  • Brown, G. T. (2004). Teachers' conceptions of assessment: Implications for policy and professional development. Assessment in Education: Principles, Policy & Practice, 11(3), 301–318.

  • Brown, G. T. (2008). Conceptions of assessment: Understanding what assessment means to teachers and students. Nova Science.

  • Brown, G. T., & Gao, L. (2015). Chinese teachers' conceptions of assessment for and of learning: Six competing and complementary purposes. Cogent Education, 2(1), 993836.

  • Carless, D. (2011). From testing to productive student learning: Implementing formative assessment in Confucian-heritage settings. Routledge.

  • Chapman, M. L. (2008). Assessment literacy and efficacy: Making valid educational decisions. Unpublished doctoral thesis, University of Massachusetts Amherst.

  • Crusan, D. (2010). Assessment in the second language writing classroom. University of Michigan Press.

  • Crusan, D. (2022). Writing assessment literacy. In Research questions in language education and applied linguistics: A reference guide (p. 431).

  • Crusan, D., & Matsuda, P. K. (2018). Classroom writing assessment. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1–7). Wiley.

  • Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers' knowledge, beliefs, and practices. Assessing Writing, 28, 43–56.

  • Dappen, L., Isernhagen, J., & Anderson, S. (2008). A statewide writing assessment model: Student proficiency and future implications. Assessing Writing, 13(1), 45–60.

  • Davies, A. (2008). Textbook trends in teaching language testing. Language Testing, 25(3), 327–347.

  • Dempsey, M. S., PytlikZillig, L. M., & Bruning, R. H. (2009). Helping preservice teachers learn to assess writing: Practice and feedback in a Web-based environment. Assessing Writing, 14(1), 38–61.

  • Farhady, H., & Tavassoli, K. (2018). Developing a language assessment knowledge test for EFL teachers: A data-driven approach. Iranian Journal of Language Teaching Research, 6(3), 79–94.

  • Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132.

  • Fulmer, G. W., Lee, I. C., & Tan, K. H. (2015). Multi-level model of contextual factors and teachers' assessment practices: An integrative review of research. Assessment in Education: Principles, Policy & Practice, 22(4), 475–494.

  • Genesee, F., & Hamayan, E. V. (1994). Classroom-based assessment. In F. Genesee (Ed.), Educating second language children. Cambridge University Press.

  • Giraldo, F. (2021). Language assessment literacy and teachers' professional development: A review of the literature. Profile: Issues in Teachers' Professional Development, 23(2), 265–279.

  • Giraldo, F., & Murcia, D. (2018). Language assessment literacy for pre-service teachers: Course expectations from different stakeholders. GiST Education and Learning Research Journal, 16, 56–77.

  • Hamp-Lyons, L. (2007). The impact of testing practices on teaching. In International handbook of English language teaching (pp. 487–504). Springer.

  • Hirvela, A., & Belcher, D. (2007). Writing scholars as teacher educators: Exploring writing teacher education. Journal of Second Language Writing, 16(3), 125–128.

  • Ho, E. C., & Yan, X. (2021). Using community of practice to characterize collaborative essay prompt writing and its role in developing language assessment literacy for pre-service language teachers. System, 101, 102569.

  • Homayounzadeh, Z., & Razmjoo, S. A. (2021). Examining 'assessment literacy in practice' in an Iranian context: Does it differ for instructors and learners? Teaching English as a Second Language (formerly Journal of Teaching Language Skills), 40(2), 1–45.

  • Inbar-Lourie, O. (2008). Constructing a language assessment knowledge base: A focus on language assessment courses. Language Testing, 25(3), 385–402.

  • James, M., & Pedder, D. (2006). Beyond method: Assessment and learning practices and values. The Curriculum Journal, 17(2), 109–138.

  • Kremmel, B., & Harding, L. (2019, March). Exploring the language assessment literacy of SLA researchers. Paper presented at the 2019 conference of the American Association for Applied Linguistics (AAAL).

  • Lam, R. (2019). Teacher assessment literacy: Surveying knowledge, conceptions and practices of classroom-based writing assessment in Hong Kong. System, 81, 78–89.

  • Lan, C., & Fan, S. (2019). Developing classroom-based language assessment literacy for in-service EFL teachers: The gaps. Studies in Educational Evaluation, 61, 112–122.

  • Lantolf, J. P., & Poehner, M. E. (2004). Dynamic assessment of L2 development: Bringing the past into the future. Journal of Applied Linguistics, 1(1), 49–74.

  • Larsen, D. (2016). Pre-service teacher preparation for L2 writing: Perspectives of in-service elementary ESL teachers. In Second language writing in elementary classrooms (pp. 172–190). Palgrave Macmillan.

  • Lee, I. (2010). Writing teacher education and teacher learning: Testimonies of four EFL teachers. Journal of Second Language Writing, 19(3), 143–157.

  • Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer Nature Singapore.

  • Lee, I., & Coniam, D. (2013). Introducing assessment for learning for EFL writing in an assessment of learning examination-driven system in Hong Kong. Journal of Second Language Writing, 22(1), 34–50.

  • Lieberman, A., & Darling-Hammond, L. (Eds.) (2012). High quality teaching and learning: International perspectives on teacher education. Routledge.

  • Lortie, D. C. (1975). Schoolteacher. University of Chicago Press.

  • Marefat, F., & Heydari, M. (2018). English writing assessment in the context of Iran: The double life of Iranian test-takers. In T. Ruecker & D. Crusan (Eds.), The politics of English second language writing assessment in global contexts (pp. 67–81). Routledge.

  • Mellati, M., & Khademi, M. (2018). Exploring teachers' assessment literacy: Impact on learners' writing achievements and implications for teacher development. Australian Journal of Teacher Education, 43(6), 1–18.

  • Mertler, C. A. (2003). Pre-service versus in-service teachers' assessment literacy: Does classroom experience make a difference? Paper presented at the annual meeting of the Mid-Western Educational Research Association, October 15–18, Columbus, Ohio.

  • Min, H. T. (2013). A case study of an EFL writing teacher's belief and practice about written feedback. System, 41, 625–638.

  • Naghdipour, B. (2016). English writing instruction in Iran: Implications for second language writing curriculum and pedagogy. Journal of Second Language Writing, 32, 81–87.

  • Nemati, M., Alavi, S. M., Mohebbi, H., & Masjedlou, A. P. (2017). Teachers' writing proficiency and assessment ability: The missing link in teachers' written corrective feedback practice in an Iranian EFL context. Language Testing in Asia, 7(1), 21. https://doi.org/10.1186/s40468-017-0053-0

  • Ölmezer-Öztürk, E., & Aydin, B. (2018). Toward measuring language teachers' assessment knowledge: Development and validation of Language Assessment Knowledge Scale (LAKS). Language Testing in Asia, 8(1), 1–15.

  • Opre, D. (2015). Teachers' conceptions of assessment. Procedia - Social and Behavioral Sciences, 209, 229–233.

  • Phipps, S., & Borg, S. (2009). Exploring tensions between teachers' grammar teaching beliefs and practices. System, 37(3), 380–390. https://doi.org/10.1016/j.system.2009.03.002

  • Pill, J., & Harding, L. (2013). Defining the language assessment literacy gap: Evidence from a parliamentary inquiry. Language Testing, 30(3), 381–402.

  • Plakans, L., & Gebril, A. (2015). Assessment myths: Applying second language research to classroom teaching. University of Michigan Press.

  • Plake, B. S. (1993). Teacher assessment literacy: Teachers' competencies in the educational assessment of students.

  • Price, J. K., Light, D., & Pierson, E. (2014). Classroom assessment: A key component to support education transformation. In R. Huang, Kinshuk, & J. K. Price (Eds.), ICT in education in global context: Emerging trends report 2013–2014. Springer.

  • Rahimi, M. (2009). The role of teacher's corrective feedback in improving Iranian EFL learners' writing accuracy over time: Is learner's mother tongue relevant? Reading and Writing, 22(2), 219–243.

  • Rezaei Fard, Z., & Tabatabaei, O. (2018). Investigating assessment literacy of EFL teachers in Iran. Journal of Applied Linguistics and Language Research, 5(3), 91–100.

  • Shepard, L. A. (2000). The role of classroom assessment in a learning culture. Educational Researcher, 29(7), 4–14.

  • Soltanpour, F., & Valizadeh, M. (2019). Iranian EFL teachers' writing assessment beliefs, literacy, and training needs: Do majors matter? Journal on English Language Teaching, 9(2), 26–41.

  • Tao, N. (2014). Development and validation of classroom assessment literacy scales: English as a foreign language (EFL) teachers in a Cambodian higher education setting. PhD dissertation, Victoria University, Australia. http://vuir.vu.edu.au/25850/1/Nary%20Tao.pdf

  • Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412.

  • Tayyebi, M., & Abbasabadi, M. M. (2020). Classroom-based writing assessment literacy components and needs: Perspectives of in-service EFL teachers in Iran. Language Research, 10(3), 588–601.

  • Vogt, K., Tsagari, D., & Spanoudis, G. (2020). What do teachers think they want? A comparative study of in-service language teachers' beliefs on LAL training needs. Language Assessment Quarterly, 17(4), 386–409.

  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.

  • Weigle, S. C. (2002). Assessing writing. Cambridge University Press.

  • Weigle, S. C. (2007). Teaching writing teachers about assessment. Journal of Second Language Writing, 16(3), 194–209.

  • Xu, Y., & Brown, G. T. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149–162.

  • Xu, Y., & Liu, Y. (2009). Teacher assessment knowledge and practice: A narrative inquiry of a Chinese college EFL teacher's experience. TESOL Quarterly, 43(3), 492–513.

  • Zhang, Z. (1996). Teacher assessment competency: A Rasch model analysis. Paper presented at the Annual Meeting of the American Educational Research Association, April 8–12. (ERIC Document Reproduction Service No. ED 400 322).


Acknowledgements

The authors wish to express their gratitude to all the EFL teachers who voluntarily participated in the study. The authors would also like to thank the editor and the anonymous reviewers for their insightful comments on the paper.

Funding

This study received no funding.

Author information

Authors and Affiliations

Authors

Contributions

The authors contributed equally to carrying out this research and preparing the manuscript. All three authors approved the final manuscript.

Corresponding author

Correspondence to Mahmoud Moradi Abbasabady.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tayyebi, M., Abbasabady, M.M. & Abbassian, GR. Examining classroom writing assessment literacy: a focus on in-service EFL teachers in Iran. Lang Test Asia 12, 12 (2022). https://doi.org/10.1186/s40468-022-00161-w

