Construction and validation of a Computerized Formative Assessment Literacy (CFAL) questionnaire for language teachers: an exploratory sequential mixed-methods investigation
Language Testing in Asia volume 14, Article number: 33 (2024)
Abstract
The contributions of Computerized Formative Assessment (CFA) to teachers’ instructional practices and students’ learning outcomes necessitate the development of valid and reliable instruments for measuring teachers’ literacy in this domain. To address this necessity, this study adopted an exploratory sequential mixed-methods design, drawing on a dual deductive–inductive approach, to develop and validate a Computerized Formative Assessment Literacy (CFAL) questionnaire for language teachers. The participants comprised 489 Iranian male and female English as a Foreign Language (EFL) teachers teaching from elementary to advanced levels at language institutes in eight major cities across the country. The results of Exploratory Factor Analysis (EFA) revealed that CFAL consists of six factors: practical, theoretical, socio-affective, critical, identity-related, and developmental. Cronbach’s alpha demonstrated that the questionnaire and its six components had satisfactory levels of reliability. The findings are discussed, and implications for professional development programs aimed at determining and promoting teachers’ CFAL are presented.
Introduction
Because assessment data play an integral role in making informed educational decisions (Oo et al., 2021; Will et al., 2019), which can contribute to improved instructional practices (De Simone, 2020; Mandinach & Gummer, 2016) and positively influence students’ learning outcomes (Leenknecht et al., 2021; Wylie, 2020; Yan & Chiu, 2022), assessment in general and formative assessment (FA) in particular constitute a pivotal dimension of educational discourse (Tomasine, 2022). FA encompasses “all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged” (Black & Wiliam, 1998, p. 7). FA thus features a set of activities through which teachers and learners gather feedback information that helps teachers modify their instructional practices and helps learners adjust their activities, thereby identifying learning gaps, offering potential for scaffolded learning, and guiding future instructional and assessment practices (Andersson & Palm, 2017; Antoniou & James, 2014; Bulut et al., 2020; Fukuda et al., 2022). Owing to its potential for fostering learning outcomes, FA has attracted the attention of scholars in the field of Second Language Acquisition (SLA). CFA, characterized as assessment methods that deploy computers to enhance the management and implementation of instructional assessment (Webb et al., 2013), has also drawn SLA scholars’ attention. The bulk of recent investigations into conventional FA (e.g., Fukuda et al., 2022; Patra et al., 2022; Teng, 2022) and CFA (e.g., Bulut et al., 2020; Gierl et al., 2018; Shin et al., 2022; Yildirim-Erbasli & Bulut, 2022) attests to the salient role of FA in both mainstream and SLA educational assessment.
If teachers are to implement appropriate FA and CFA practices aimed at making informed instructional decisions, they need to possess a satisfactory level of Assessment Literacy (AL) (Husain, 2021; See et al., 2021). AL, with its origins in general education (Stiggins, 1991), refers to the knowledge, skills, and principles required of stakeholders who carry out assessment responsibilities (Coombe et al., 2020; Rauf & McCallum, 2020). It is defined as an individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions (Popham, 2011; Soh & Zhang, 2017; Will et al., 2019). As Crusan et al. (2016) contend, “assessment literacy is not just about content or delivery but how this content is enmeshed with teachers’ knowledge, belief, and practices” (p. 45).
Closely related to FA is the concept of formative assessment literacy (FAL). FAL is characterized as teachers’ repertoire of knowledge, skills, and principles for adapting their assessment-based instructional practices to learners’ needs in order to provide attuned feedback (Bennett, 2011). Drawing on an AL repertoire rooted in their belief systems about FA, teachers make such adaptations to guide students’ learning processes and optimize learning outcomes. Notwithstanding the paramount importance of FAL for improved classroom instructional practices (Torrance, 2012) and the accountability pressures on teachers to employ assessment data in making informed educational decisions (Wayman et al., 2012), teachers struggle with the employment of FA (Furtak et al., 2016). This struggle can be partly explained by the complexities involved in this mode of assessment (Elwood, 2006). Moreover, some investigations have revealed teachers’ unwillingness (e.g., Brown, 2004; Remesal, 2007) to change or adjust their assessment practices to accommodate ever-developing conceptualizations of assessment and, consequently, to address learners’ educational needs (Brookhart, 2011). With the fast-growing use of computers in the domain of assessment, a similar lack of accommodation on teachers’ part can be observed in relation to CFA (Charman, 2013).
The complexities and challenges involved in the conceptualization and implementation of CFA necessitate the development and validation of sound and reliable instruments that can assist in designing professional programs for developing teachers’ CFAL. CFAL is defined as teachers’ knowledge, skills, and principles of formative assessment for delivering instructional assessment via computers (Bulut et al., 2020; Shin et al., 2022; Yildirim-Erbasli & Bulut, 2022). In this study, CFAL was conceptualized as a construct encompassing six factors, namely practical, theoretical, socio-affective, critical, identity-related, and developmental, based on the previous literature (e.g., Leenknecht et al., 2021; Looney et al., 2018; Pastore & Andrade, 2019; Tajeddin et al., 2022; Wylie, 2020; Yan & Cheng, 2015; Yan & Pastore, 2022) and on newly emerging themes in the collected data.
A review of the literature indicates that, thus far, several studies (e.g., Campbell et al., 2002; Fulcher, 2012; Mayo, 1967; Mertler & Campbell, 2005; Plake et al., 1993; Soh & Zhang, 2017; Tajeddin et al., 2022; Zhang & Burry-Stock, 1997) have developed instruments for measuring AL, and a small number (e.g., Cagasan et al., 2020; Wylie, 2020; Yan & Pastore, 2022) have developed instruments for measuring FAL in particular. However, no available investigation has attempted to develop an instrument for measuring CFAL. The current study therefore set out to address this lacuna by developing and validating a CFAL questionnaire for language teachers.
Conceptual framework
Assessment literacy
AL, as a vital professional requirement in educational systems, is characterized as “an individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions” (Popham, 2011, p. 267). The concept of AL incorporates particular assessment behaviors, including employing multiple assessments aligned with clearly pinpointed achievement goals; interpreting learner performance with regard to the adopted mode of assessment and the potential role of external factors; scoring and administering assessments appropriately; communicating assessment results to interested stakeholders; and observing legal and ethical norms in carrying out assessment responsibilities (Brookhart, 2011; Stiggins, 1991). Given its significance, AL has been underscored as a salient component in teacher evaluation frameworks and models (see Danielson, 2013; Marzano, 2013). Marzano’s (2013) teacher evaluation model emphasizes teachers’ competence in deploying assessments to trace learner progress and document a lesson’s effectiveness. Danielson’s (2013) framework for teaching requires that proficient teachers demonstrate skill in designing student assessment, employing assessment in the process of instruction, and identifying high-quality data sources for keeping track of learner development.
Highlighting a process-oriented approach towards AL, Popham (2006) and Brookhart (2011) concur that the view of assessment as merely test scores and grades should be broadened to include how teachers design assessment methods to gather data about student development, analyze the gathered data, and modify their instructional practices in alignment with those data to maximize learning effectiveness. Accordingly, AL should address both teachers’ conceptualization of assessment and their data-based assessment practices, so that learning outcomes and teachers’ decision-making are influenced in an informed and systematic way. Said another way, AL as a process is concerned not only with assessment principles and the associated assessment knowledge but also with the assessment skills teachers need for designing and administering tests (Coombe et al., 2020; Rauf & McCallum, 2020).
Apart from the triad of principles, knowledge, and skills as the major dimensions of AL, another important aspect is critical language AL, which concerns teachers’ knowledge of assessment objectives, scopes, and types; assessment use consequences; fairness; assessment policies; and national policy and ideology (Tajeddin et al., 2022). Moreover, Looney et al. (2018) have emphasized the ethical aspects of assessment and regarded AL as a concept connected with teacher identity. Accordingly, teachers’ beliefs, life and teaching experiences, and feelings should be regarded as relevant in order to provide a fuller picture of AL (Yan & Pastore, 2022).
Formative assessment
FA features a set of activities undertaken by teachers and learners to gather information to be drawn upon as feedback for modifying teachers’ instructional practices and adjusting learners’ activities (Black & Wiliam, 1998). Based on the collected information, FA enhances students’ self-regulation (Andrade & Heritage, 2018), assists in determining learning gaps, offers potential for scaffolded learning, and guides future instructional practices (Bennett & Gitomer, 2009). FA has been favored as an influential mode of assessment owing to its critical role in improving learning achievement (e.g., Andersson & Palm, 2017; Yan & Chiu, 2022), providing opportunities for innovation in learning environments (e.g., Wylie, 2020), and enhancing writing (e.g., Graham et al., 2015; Teng, 2022), reading (e.g., Xuan et al., 2022; Yan & Chiu, 2022), speaking (e.g., Bagheri Nevisi & Mohammad Hosseinpur, 2022), and listening (e.g., Ghazizadeh & Motallebzadeh, 2017).
Despite the influential role of FA in enhancing different language skills, a major concern is that teachers are rather reluctant to employ this assessment mode (Remesal, 2007), and consequently FA is scarcely used (Yan & Pastore, 2022). Addressing this concern, Antoniou and James (2014) underscore the need for a more elaborate conceptualization of an established set of FA practices that supports the learning process. In such a conceptualization, teachers’ beliefs and voices should be acknowledged so as to capture the realities of classroom practice and, concomitantly, assist teachers in making more informed decisions about the complexities involved in the teaching–learning process (Cizek et al., 2019; Gotwals & Cisterna, 2022; Yan & Pastore, 2022).
The complexities involved in FA stem from the challenges teachers encounter in understanding the value of FA and from their lack of FA-related knowledge (Schneider & Bodensohn, 2017). Although teachers acknowledge the vital role of assessment evidence, they are not well equipped with the management skills needed to orchestrate multiple information sources in order to understand and deliver attuned FA (Yan & Pastore, 2022). As Yan and Brown (2021) maintain, a lack of understanding and knowledge of FA results in an inadequate level of literacy, which in turn undermines teachers’ capability to draw on assessment data in making informed instructional decisions.
Formative assessment literacy
Previous investigations (e.g., Yan & Cheng, 2015) have pointed out that the practice of FA at the classroom level is not satisfactory, and some studies (e.g., Wylie & Lyon, 2015) have found that FA is scarcely used by teachers. Highlighting the challenges of FA, Stiggins (2017) has shown that FA practices are misaligned with educational policies and principles. Such complexities and challenges reveal that teachers lack a satisfactory level of FAL for delivering FA appropriately, achieving educational objectives, and enhancing learning outcomes through a systematic approach to FA at both the conceptual and practical levels. To address these challenges, some studies have aimed at developing instruments for measuring FAL so as to contribute to a better understanding of FA.
Some investigations (e.g., Cagasan et al., 2020; Wylie, 2020), aiming to support the effective implementation of FA, have employed observation protocols and rubrics to document FA practices in vivo. However, such studies have considered only one aspect of FAL. More recently, Yan and Pastore (2022) developed and validated an instrument for measuring FAL consisting of three dimensions, namely conceptual, practical, and socio-emotional. As this review indicates, the participants in these studies have mainly been primary or secondary school teachers rather than language teachers in general or EFL teachers in particular. Moreover, these investigations have addressed conventional FAL and have not delved into CFA.
Computerized formative assessment
CFA is characterized as the use of computers in test design, test administration, and the interpretation of test scores and assessment results, which serve as feedback to help learners adjust their learning activities and to assist teachers in modifying their instructional practices (Bulut et al., 2020; Shin et al., 2022; Yildirim-Erbasli & Bulut, 2022). CFA can be applied for different purposes, including identifying low achievers and tracking students’ achievement or progress over a specified period of time (Bulut et al., 2020). CFA provides opportunities for observing and analyzing the progress of an individual learner or a group of learners based on item response times (Choi & McClenen, 2020), process data (Lu, 2022), and user interaction history (Granić, 2008).
The use of e-learning management systems, web-based adaptive learning systems, and online FA systems has been on the rise recently (Angelone et al., 2022; Cham et al., 2022; Choi & McClenen, 2020; Chrysafiadi et al., 2022). In the same vein, owing to the beneficial effects of computerized assessment in general and CFA in particular, the employment of CFA in various educational contexts has received considerable attention from scholars (e.g., Bulut et al., 2020; Charman, 2013; Gierl et al., 2018; Shin et al., 2022; Shirley & Irving, 2015; Yildirim-Erbasli & Bulut, 2022). However, as with FA, the implementation of CFA has not been without its challenges.
The practice of CFA places burdens on teachers not only with regard to FA principles, knowledge, skills, and the critical dimension of FA but also in terms of the computer-related literacy required for computerized formative test design and administration (McNeil, 2018; Perry et al., 2022; Tsagari & Vogt, 2017). Accordingly, teachers should develop their computer literacy with respect to both the conceptualization and the implementation of CFA. To address the challenges associated with CFA, to provide a basis for professional programs promoting language teachers’ CFAL, and in view of the importance of language-related, content-specific nuances and computer-related, context-specific particularities in FAL, the present study sought to develop and validate an instrument for measuring CFAL.
Methods
Participants
The participants, selected through convenience sampling, comprised 489 Iranian EFL teachers teaching from elementary to advanced levels at language institutes in eight major cities across the country. They consisted of 238 male (48.67%) and 251 female (51.33%) teachers. Their ages ranged from 21 to 51 (M = 36.5, SD = 8.40), and their teaching experience ranged from 1 to 21 years (M = 14.7, SD = 6.80). All participants had experience of using CFA, ranging from 1 to 17 years (M = 12.85, SD = 5.4). Table 1 displays the participants’ detailed demographic information.
To recruit the participants, the researcher initially sought the approval of the educational managers of 14 language institutes across the country, which together had 81 branches. To ensure that only teachers with CFA experience were recruited, the researcher briefed the institute managers on the purposes of the study. The managers were then sent an email to forward to the branch managers, providing prospective participants with information about the aims of the study and the data collection. In the email, teachers were invited to take part in the study provided that they had at least one year of experience with CFA. The sampling procedure was thus of the convenience type, as only teachers who expressed willingness to participate were included; convenience sampling was adopted because it was not feasible for the researcher to recruit participants through purely randomized procedures.
Instruments
Semi-structured and focus group interviews were employed to collect the qualitative data required for developing the items; the two interview types were combined to gather enriched data. To develop the semi-structured interview questions, the pertinent literature on the conceptualizations of AL (e.g., Coombe et al., 2020; Fulcher, 2012; Looney et al., 2018; Pastore & Andrade, 2019; Popham, 2006, 2011; Soh & Zhang, 2017; Tajeddin et al., 2022) in general and of FAL (e.g., Cagasan et al., 2020; Wylie, 2020; Yan & Pastore, 2022) and CFA (e.g., Bulut et al., 2020; Charman, 2013; Choi & McClenen, 2020; Gierl et al., 2018; Shin et al., 2022; Yildirim-Erbasli & Bulut, 2022) in particular was first extensively reviewed. Based on this literature, an initial list of ten questions was developed and given for review to a panel of six experts holding PhDs, each with at least 15 years of experience in educational assessment. Three questions were excluded because of overlapping content, and the panel approved a final list of seven questions (Appendix 1). This list was then piloted with five teachers selected randomly from the pool of participants to identify any ambiguities in content. After minor modifications, the final list of seven questions was prepared.
The development of the CFAL questionnaire
To contact the participants, a list of 25 English language institutes located in different cities across the country was initially prepared. The institute managers were then contacted by phone to seek their consent for their teachers to take part in the study, and they were briefed on the purposes of the study and the data collection. Of the 25 managers, 14 agreed to let their teachers participate. A consent form was then sent to these 14 managers, who were asked to forward it to teachers willing to take part. The consent form provided brief information about the aims of the study and the data collection procedure, assured the teachers that the collected data would be kept confidential and used only for research purposes, and informed them that they could withdraw from the study at any stage without prior notice.
An important phase in the development of the CFAL questionnaire involved establishing the theoretical framework in light of the most current developments in the conceptualizations of AL, FAL, and CFA, so as to lay a rigorous conceptual foundation for guiding the interviews. Following this, the researchers prepared the finalized list of interview questions (n = 7) described in the instruments section. Then, 32 teachers, selected randomly from the list of participants, were contacted via Telegram and each interviewed for approximately 35 min. Moreover, a group of 15 teachers, selected randomly from the pool of 489 teachers, took part in a focus group interview lasting 2 h.
Since the participants were EFL teachers whose mother tongue was Persian, the researcher gave them the option of selecting the language of the interview sessions, so as to obviate possible challenges arising from constrained language proficiency and any ambiguities stemming from a foreign language barrier. For the individual interviews, the language was therefore chosen by each participant. For the focus group interview, the group’s preference was obtained; as the majority of attendants opted for their mother tongue, Persian was selected for these sessions. The interviews were recorded, transcribed verbatim, and subjected to content analysis.
Content analysis was carried out by adopting the six stages proposed by Braun and Clarke (2006), which encompass the sequential progression of “1) getting familiar with the data 2) generating initial codes 3) searching for the themes 4) reviewing themes 5) defining and naming the themes and finally 6) writing up the report” (Braun & Clarke, 2006, p. 87). To ensure the reliability of the analytical procedures, an assistant was recruited to support the content analysis (Hsieh & Shannon, 2005). Initially, the interview transcripts were read multiple times to gain sufficient familiarity with the dataset, and preliminary impressions were noted for further reference. Subsequently, the data underwent a systematic process of coding, categorization, and condensation into meaningful units. The co-researcher independently conducted a content analysis of the data following the same approach. Inter-rater agreement was then quantified using Holsti’s (1969) coefficient of reliability; the obtained value of 0.81 surpassed the acceptable threshold of 0.70, indicating satisfactory agreement between the two raters. Discrepancies between the outcomes derived by the primary researcher and those of the assistant were scrutinized and resolved through discussion. Additionally, to enhance the credibility of the findings, member checking was undertaken in accordance with Nassaji’s (2020) recommendations: discussions were held with eight participants to confirm that the interpretations and conclusions drawn from the data were accurate and aligned with the participants’ perspectives.
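As a concrete illustration, Holsti’s coefficient is simply twice the number of agreed coding decisions divided by the sum of the two coders’ decision counts. The following minimal Python sketch computes it; the figures are hypothetical placeholders chosen only to reproduce the reported 0.81, not the study’s actual coding counts.

```python
def holsti_cr(agreements: int, n_coder1: int, n_coder2: int) -> float:
    """Holsti's (1969) coefficient of reliability: CR = 2M / (N1 + N2),
    where M is the number of coding decisions the two coders agree on,
    and N1, N2 are the total decisions made by each coder."""
    return 2 * agreements / (n_coder1 + n_coder2)

# Hypothetical counts for illustration only (not the study's data):
# each coder made 200 coding decisions and they agreed on 162.
cr = holsti_cr(agreements=162, n_coder1=200, n_coder2=200)
print(f"Holsti's CR = {cr:.2f}")  # 0.81, above the 0.70 threshold
```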
The results of the content analysis along with the conceptual framework of the study were then used to develop an initial pool of 35 items reflecting different dimensions of CFAL including the practical, theoretical, socio-affective, critical, identity-related, and developmental components. In what follows, some exemplar interview excerpts and the corresponding themes identified are given.
Excerpt 1: I am quite familiar with the theories of CFA, not much familiar though. But, I know that through CFA I can identify the weaknesses and strengths of the learners. Then, based on these weaknesses and strengths I know what sort of content to deliver to my students. Also, CFA can help me get to know how to proceed with the assessment later. For instance, when a learner needs more assistance and content in relation to grammar, I know that I should focus on the assessment of grammar for that particular student.
Excerpt 2: I think my knowledge of CFA is quite good. I use formative assessment on a regular basis in my teaching and I sometimes study articles in relation to CFA to improve my teaching with CFA. I think the best aspect of CFA is that it can help you identify in what areas each learner needs feedback and help most. Based on this, I can then gear my feedback to individual learners. Overall, I think CFA can provide rich and usable data for each learner. The teacher can then go to this data bank whenever he wants to get to know how the learner has progressed and how to best provide help and feedback to that learner.
The themes extracted from excerpts one and two were theoretical knowledge and personalized instruction, which were used to develop the following two items:
- I know the theories underlying CFA for language teaching and learning purposes.
- I know how CFA helps me deliver personalized instruction to foster learning for each individual learner.
Excerpt 3: When using CFA, I always think about learners first. I know that assessment is mainly for the learners and their learning although it can also help the teacher teach better but the main focus should be on the learners. So, learners’ thoughts and whether they like or dislike this assessment mode are very important. If learners do not understand and appreciate the way CFA can help them improve, the use of CFA will be rather useless.
Excerpt 4: When I want to use some new forms of assessment such as CFA, I always think about whether it affects learners’ attempts to continue their language learning. The main question I ask myself is, is the learners’ interest to continue language learning influenced positively or negatively while using the new assessment mode? Therefore, I always think about such learner-related issues when conducting CFA.
The themes extracted from excerpts three and four were affective aspects of CFA, the importance of learners’ attitudes and values, and the importance of learning motivation, which were drawn upon to construct the following two items:
- I know that learners’ language learning motivation can be affected via CFA experiences.
- I am conscious of how learners’ attitudes, beliefs, and values influence their CFA experiences.
Next, this initial list of items was submitted to a panel of experts comprising five PhD holders with more than 22 years of experience in educational assessment and CFA, all of whom were involved in teacher education programs related to educational assessment and CFA in the field of Teaching English as a Foreign Language (TEFL). The experts were given the list of items and asked to comment on the alignment of the items with the components. Based on their comments, two items were merged and three items were discarded because their content overlapped with that of other items. Consequently, the approved draft of the questionnaire comprised 31 items: the practical component had 6 items, the theoretical component 7, the socio-affective and critical components 5 each, and the identity-related and developmental components 4 each. A 5-point Likert response format was adopted: (1) strongly disagree, (2) disagree, (3) neutral, (4) agree, and (5) strongly agree.
The approved draft was then piloted with 30 teachers whose characteristics were similar to those of the main participants, to identify any ambiguities affecting the readability and clarity of the items. Two items were revised in terms of lexicon and grammar. The questionnaire was then distributed to the 489 EFL teachers; seven questionnaires were not returned, leaving 482 questionnaires for analysis. These questionnaires were scored, and the data were entered into SPSS 26 for Exploratory Factor Analysis (EFA).
Research design
This study adopted an exploratory sequential mixed-methods design in which qualitative data were first collected and analyzed to develop an instrument, which was subsequently validated through quantitative data collection and analysis. For the qualitative phase, a dual deductive–inductive approach was adopted: the collected data were first analyzed against the dimensions of literacy provided by the literature on AL, FAL, and CFA, and any newly emergent themes from the semi-structured and focus group interviews were then identified and also used to develop the questionnaire items.
Results
Prior to running EFA, the appropriateness of the data structure for EFA was checked. First, the skewness and kurtosis measures of the items were examined; all fell within the range of −1.96 to +1.96, so the data did not violate the normality assumption (Tabachnick & Fidell, 2013). Next, the Kaiser–Meyer–Olkin (KMO) measure was examined to check sampling adequacy; the KMO index was 0.86, exceeding the recommended value of 0.6 (Field, 2013). Finally, Bartlett’s test of sphericity was significant (p < 0.01), confirming the factorability of the data. With these assumptions checked, EFA was run. Table 2 presents the results.
As presented in Table 2, the six extracted factors had eigenvalues higher than 1, explaining 23.74%, 19.15%, 16.76%, 13.72%, 12.87%, and 11.72% of the variance, respectively. Overall, the six-factor solution explained 97.97% of the variance.
To inspect the six identified factors graphically, the scree plot was examined. As shown in Fig. 1, a sharp drop occurs after the first six factors.
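Although the analyses were conducted in SPSS 26, the suitability checks and factor extraction reported above can also be reproduced with open-source tools. The sketch below uses Python’s factor_analyzer package; the file name cfal.csv is a hypothetical placeholder for the 482 × 31 item responses, and the oblimin rotation is an assumption (an oblique rotation is implied by the pattern matrix reported in Table 4, but the article does not state the rotation method).

```python
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import kurtosis, skew
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("cfal.csv")  # hypothetical file: one column per item

# Normality screening: item skewness and kurtosis should fall within
# the -1.96 to +1.96 range (Tabachnick & Fidell, 2013).
print(items.apply(skew), items.apply(kurtosis))

# Factorability: KMO sampling adequacy and Bartlett's test of sphericity.
_, kmo_total = calculate_kmo(items)
chi2, p = calculate_bartlett_sphericity(items)
print(f"KMO = {kmo_total:.2f}; Bartlett chi2 = {chi2:.1f}, p = {p:.4f}")

# Six-factor EFA; the oblique (oblimin) rotation is assumed here.
fa = FactorAnalyzer(n_factors=6, rotation="oblimin")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues.round(2))

# Scree plot of the eigenvalues (cf. Fig. 1).
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, "o-")
plt.axhline(1.0, linestyle="--")  # Kaiser criterion (eigenvalue = 1)
plt.xlabel("Component")
plt.ylabel("Eigenvalue")
plt.show()
```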
To further explore the number of components and determine which to retain or drop, parallel analysis was performed. Parallel analysis computes average eigenvalues from randomly generated samples of the same size and thereby provides evidence, beyond the eigenvalue indices and scree plot, for retaining or dropping the identified components (Pallant, 2020). Table 3 presents the results of the parallel analysis.
As indicated in Table 3, the first six components have initial eigenvalues larger than the corresponding simulated values. Accordingly, the six identified components were retained.
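A minimal sketch of this procedure (Horn’s parallel analysis) is given below in Python; the file name cfal.csv and the 1000-replication count are assumptions for illustration rather than details reported in the article.

```python
import numpy as np
import pandas as pd

def parallel_analysis(obs_eigs, n_obs, n_vars, n_sims=1000, seed=0):
    """Compare observed eigenvalues with the mean eigenvalues of
    correlation matrices of random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    sim_eigs = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        random_corr = np.corrcoef(
            rng.standard_normal((n_obs, n_vars)), rowvar=False)
        sim_eigs[i] = np.sort(np.linalg.eigvalsh(random_corr))[::-1]
    return obs_eigs > sim_eigs.mean(axis=0)

items = pd.read_csv("cfal.csv")  # hypothetical file name, as above
obs_eigs = np.sort(np.linalg.eigvalsh(items.corr().to_numpy()))[::-1]
keep = parallel_analysis(obs_eigs, n_obs=len(items), n_vars=items.shape[1])

# Retain the leading run of components whose observed eigenvalue
# exceeds the simulated mean.
n_retain = 0
for flag in keep:
    if not flag:
        break
    n_retain += 1
print("Components retained:", n_retain)
```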
Table 4 displays the pattern matrix showing the items belonging to each component.
As Table 4 demonstrates, items 8, 11, 9, 10, 7, 12, and 13 belonged to the first component (theoretical); items 5, 4, 3, 2, 6, and 1 comprised the second component (practical); items 16, 15, 18, 14, and 17 belonged to the third component (socio-affective); and items 19, 21, 22, 20, and 23 constituted the fourth component (critical). Furthermore, items 29, 30, 31, and 28 belonged to the fifth component (developmental), whereas items 25, 26, 24, and 27 were the constituents of the sixth component (identity-related). As noted in Table 4, all CFAL questionnaire items functioned well, and no overlaps were detected among the items loading on different factors. Accordingly, no items were discarded, and all 31 items were retained (see Appendix 2 for the final draft of the questionnaire). The satisfactory functioning of all items and the absence of overlaps could be attributed to several factors: the meticulous care exercised in developing the items, the variation achieved in the selection of participants, and the clear instructions given to participants for filling out the CFAL questionnaire. As for the first factor, the extensive review of the literature, the appeal to expert opinion, and the use of semi-structured as well as focus group interviews may all have contributed to enriched qualitative data, on the basis of which a well-structured questionnaire was designed at the outset. Additionally, as mentioned in the methods section, two items in the initial draft were merged and three were excluded because of overlapping content; had these items been retained in the final draft, they would most probably have loaded on multiple factors or been deleted during the quantitative analysis. Their identification, merging, and exclusion thus yielded more accurate quantitative data for the EFA procedures. With regard to the second factor, recruiting a wide array of teachers in terms of overall teaching experience, CFA experience, age, and gender (see Table 1) could have produced data reflecting the CFAL construct with a high level of distinction among its components. Thirdly, since maximum care was taken to provide participants with clear instructions for filling out the questionnaire, they responded to the items attentively, which could have resulted in data accurately displaying the distinctive components of CFAL.
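To illustrate how the item–component assignments in Table 4 can be read off a pattern matrix, the sketch below re-runs the EFA from the earlier sketch and assigns each item to the factor carrying its largest absolute loading; the item labels (item1 … item31, in questionnaire order) and the .32 salience threshold (a common rule of thumb, e.g., Tabachnick & Fidell, 2013) are illustrative assumptions.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("cfal.csv")  # hypothetical file name, as above
fa = FactorAnalyzer(n_factors=6, rotation="oblimin")
fa.fit(items)

# Pattern matrix: rows are items, columns are the six factors.
loadings = pd.DataFrame(
    fa.loadings_,
    index=[f"item{i}" for i in range(1, items.shape[1] + 1)],
    columns=[f"F{j}" for j in range(1, 7)],
)
assignment = loadings.abs().idxmax(axis=1)      # dominant factor per item
salient = (loadings.abs() > 0.32).sum(axis=1)   # salient loadings per item
print(assignment)                               # the Table 4 groupings
print("Cross-loading items:", list(loadings.index[salient > 1]))
```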
Table 5 displays the items belonging to each component along with the descriptive statistics of the items.
As indicated in Table 5, the means of all the items are above 3, indicating a moderate level of CFAL among the participants.
After the factor structure had been explored, Cronbach’s alpha was computed to establish the reliability of the questionnaire. Table 6 presents the Cronbach’s alpha statistics for the whole questionnaire and for each of its six components.
As seen in Table 6, all the Cronbach’s alpha indices exceed 0.70, indicating that the instrument as a whole and each of its components have acceptable levels of internal consistency. The high indices can be attributed in part to the large sample size (Karakaya & Alparslan, 2022); the meticulous care taken in designing a clear and well-structured instrument could also have contributed to the high internal consistency of the CFAL questionnaire.
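Cronbach’s alpha can be computed directly from its definition, alpha = (k/(k − 1)) × (1 − (sum of item variances) / (variance of the total score)). The sketch below does so for the whole scale and for each component, using the item groupings reported for Table 4; the file and column names are hypothetical placeholders.

```python
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the summed scale), with k the number of items."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("cfal.csv")  # hypothetical file name, as above
print(f"Whole scale: {cronbach_alpha(items):.2f}")

# Component memberships follow Table 4 (items numbered in
# questionnaire order; the labels item1 ... item31 are assumptions).
components = {
    "practical": [f"item{i}" for i in range(1, 7)],
    "theoretical": [f"item{i}" for i in range(7, 14)],
    "socio-affective": [f"item{i}" for i in range(14, 19)],
    "critical": [f"item{i}" for i in range(19, 24)],
    "identity-related": [f"item{i}" for i in range(24, 28)],
    "developmental": [f"item{i}" for i in range(28, 32)],
}
for name, cols in components.items():
    print(f"{name}: {cronbach_alpha(items[cols]):.2f}")
```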
Discussion
This study aimed to develop and validate a CFAL scale by adopting an exploratory sequential mixed-methods design. In developing the scale, a dual deductive–inductive approach to qualitative data collection and analysis was adopted, and the results were used to develop a 31-item questionnaire. The EFA revealed a six-factor model showing that CFAL consists of practical, theoretical, socio-affective, critical, identity-related, and developmental dimensions. Cronbach’s alpha demonstrated that the developed questionnaire and its six components had satisfactory levels of reliability.
The results of this study concerning the theoretical and practical dimensions directly support the three-dimensional model of teacher assessment literacy proposed by Pastore and Andrade (2019), which comprises conceptual, practical, and socio-emotional dimensions. Whereas the theoretical and practical dimensions found here directly resonate with their model, the present study revealed a socio-affective rather than a socio-emotional dimension. Pastore and Andrade’s (2019) model was drawn upon by Yan and Pastore (2022) to develop a FAL scale for teachers, and their results directly substantiated the three-dimensional model by confirming the conceptual, practical, and socio-emotional dimensions. In the present study, however, apart from the practical, theoretical, and socio-affective facets, three further components were uncovered: the critical, identity-related, and developmental dimensions. The disparity between the findings of the current study and those of Yan and Pastore (2022) may stem from the fact that they adopted a deductive approach to scale development, whereas the dual deductive–inductive approach used here could have contributed to the identification of the newly emergent themes and factors.
The results of the current study substantiated findings in the extant literature concerning the existence of a critical dimension of teacher assessment literacy. Similar to the present results, Tajeddin et al. (2022) found a critical dimension of teacher assessment literacy consisting of teachers’ knowledge of assessment objectives, scopes, and types; assessment use consequences; fairness; assessment policies; and national policy and ideology. The results regarding the identity-related dimension of CFAL corroborate Looney et al.’s (2018) observation that teachers’ role as assessors is a significant identity-related aspect of their assessment responsibilities. Although Looney et al. (2018) underscored the significance of teachers’ roles as assessors in relation to their identity, they did not link such roles to AL; in the present study, by contrast, teachers’ knowledge of the principles regarding their identity emerged as part of their AL.
The theoretical dimension of CFAL, in this study, incorporates teachers’ knowledge of the principles and theories underlying CFA for language teaching and learning purposes, on the basis of which teachers can obtain and interpret diverse data types to enhance language learning outcomes. Such knowledge can also contribute to identifying students’ learning needs, assisting teachers in delivering tailored and differentiated instruction. In addition, knowledge of theories and principles lays the foundation for teachers to become familiar with different CFA methods for various language skills and components, and to develop the analytical repertoire needed to explain how CFA can assist learners in regulating their learning on the basis of the feedback they receive. These results echo previous conceptual and empirical investigations (e.g., Coombe et al., 2020; Pastore & Andrade, 2019; Rauf & McCallum, 2020; Stiggins, 1991; Will et al., 2019; Yan & Pastore, 2022) regarding the knowledge of principles and theories in AL in general and in FAL and CFA in particular.
The practical dimension of CFAL relates to the implementation of different CFA tools and systems as well as the administration of tests to achieve course objectives. It also reflects teachers’ literacy in using computer-generated feedback information to help learners identify their learning needs and to guide them toward the set assessment criteria. These results corroborate the findings of previous research (e.g., Leenknecht et al., 2021; Wylie, 2020; Yan & Cheng, 2015) concerning the practical aspect of FAL.
The socio-affective component of CFAL encompasses teachers’ awareness of learners’ anxiety, motivation, attitudes, beliefs, values, agency, and emotions while engaged in CFA experiences. These results substantiate previous research (e.g., Leenknecht et al., 2021; Patra et al., 2022; Yan & Pastore, 2022) on the pivotal role of learners’ affective side and attitudes in relation to FA. The critical facet of CFAL comprises teachers’ awareness of the future consequences of CFA, of the impact of CFA use on developing a specific assessment culture, of fairness issues, and of how national assessment policy and educational ideologies can shape CFA. These results are congruent with previous studies (e.g., Schildkamp et al., 2020; Tajeddin et al., 2022) acknowledging a critical aspect of AL. The identity-related component of CFAL highlights teachers’ literacy regarding their role as assessors, their awareness of agency and voice in this role, and their attitudes and beliefs as assessors. These findings confirm previous research (e.g., Looney et al., 2018; Yan & Cheng, 2015) underscoring teachers’ identity and attitudes in relation to FA.
The developmental component of CFAL comprised teachers’ awareness of the need to develop their theoretical knowledge of CFA and of its relation to their professional development. This dimension was a novel theme in the current study and has not been reported in the available literature. In essence, teachers’ awareness of their professional development should be considered part of the knowledge and principles teachers need in order to perform their assessment responsibilities effectively and appropriately. Said another way, teachers’ consciousness of the significance of developing their theoretical and practical knowledge, via attending workshops or taking part in other professional activities, and of how such knowledge can influence their CFA-related instructional practices should be considered part of AL in general and CFAL in particular.
Previous studies have shown that steady development and continuous learning play an essential role in improving educational policy for teachers and teacher education, thereby enhancing students’ learning outcomes (De Simone, 2020; Gore et al., 2021). However, the lack of effective instruments for measuring language assessment literacy has long undermined efforts to plan teacher development programs and teacher assessment practices (See et al., 2021; Yan & Pastore, 2022). Educational research, combined with professional development models, needs to illustrate the efficacy of formative assessment literacy and its contribution to teachers’ development. It is therefore of pivotal importance to consider the interplay between the developmental dimension of language teachers’ CFAL and teachers’ professional development.
Conclusion
Attempting to lay the foundation for teachers’ professional development programs that promote FAL (Yan & Pastore, 2022), this study developed and validated a questionnaire that contributes to the identification of CFAL components. Identifying these components can improve teachers’ awareness of CFAL and, consequently, their assessment practices, which can foster students’ learning outcomes (De Simone, 2020). If teachers are to function as effective assessors, they need to possess and develop literacy in all six components of CFAL. The accumulation and development of such literacy can further consolidate the intersections between teacher education and assessment in promoting students’ learning outcomes (Stiggins, 2017) and in linking professional development routes to improved teacher AL and assessment practice (Will et al., 2019).
An important leverage point for improving assessment-related instructional practices that enhance learning outcomes is teachers’ theoretical and practical knowledge of assessment. Thus, teacher educators, experts, and scholars in educational assessment can employ the CFAL questionnaire as a platform for designing professional development programs aimed at enhancing language teachers’ CFAL dimensions. The instrument can be used to identify the dimensions of CFAL in which teachers need more support and/or development. More specifically, as a factor related to teacher professionalism, the scale developed and validated in the current study can provide data on teachers’ CFAL profiles (strengths and weaknesses), which can be capitalized on to deliver tailored education to teachers. Such data can help teachers develop their CFAL and pave the way for the design of effective teacher training programs. Moreover, the instrument can be used to evaluate the efficacy of teacher training programs or interventions aimed at fostering teachers’ CFAL. This six-dimensional model thus has the potential to pinpoint EFL teachers’ CFAL training needs. Teachers can also use the questionnaire for self-assessment, to identify the facets of CFAL in which they require more knowledge and skills. Effective teacher education programs and teachers’ self-assessment in relation to CFAL may help foster teachers’ CFA practices, which can ultimately promote students’ learning outcomes.
The present study was carried out in an EFL setting among language teachers. Since the teaching context can influence FA (Yan et al., 2021), future studies can validate the developed instrument across different teaching environments (e.g., ESL contexts). Furthermore, since the implementation of FA may differ across disciplines (Yan & Pastore, 2022), future studies may adapt and validate the CFAL questionnaire for various disciplines. Moreover, future studies may address the practice of CFAL to determine the factors constituting formative assessment practice in computerized assessment environments.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Andersson, C., & Palm, T. (2017). The impact of formative assessment on student achievement: A study of the effects of changes to classroom practice after a comprehensive professional development programme. Learning and Instruction, 49, 92–102. https://doi.org/10.1016/j.learninstruc.2016.12.006
Andrade, H. L., & Heritage, M. (2018). Using formative assessment to enhance learning, achievement, and academic self-regulation. Routledge. https://doi.org/10.4324/9781315623856
Angelone, A. M., Galassi, A., & Vittorini, P. (2022). Lessons learned about the application of adaptive testing in several first-year university courses. International Journal of Learning Technology, 17(1), 3–26. https://doi.org/10.1504/IJLT.2022.123696
Antoniou, P., & James, M. (2014). Exploring formative assessment in primary school classrooms: Developing a framework of actions and strategies. Educational Assessment, Evaluation and Accountability, 26(2), 153–176. https://doi.org/10.1007/s11092-013-9188-4
Bagheri Nevisi, R., & Mohammad Hosseinpur, R. (2022). Task-based speaking assessment in an EFL academic context: A case of summative and formative assessment. Research in English Language Pedagogy, 10(2), 256–276.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678
Bennett, R. E., & Gitomer, D. H. (2009). Transforming K-12 assessment: Integrating accountability testing, formative assessment and professional support. In C. Wyatt-Smith & J. J. Cumming (Eds.). Educational Assessment in the 21st Century (pp. 43–62). Springer. https://doi.org/10.1007/978-1-4020-9964-9_3
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practices, 30(1), 3–12. https://doi.org/10.1111/j.1745-3992.2010.00195.x
Brown, G. T. L. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development. Assessment in Education: Principles Policy & Practice, 11, 301–318. https://doi.org/10.1080/0969594042000304609
Bulut, O., Cormier, D. C., & Shin, J. (2020). An intelligent recommender system for personalized test administration scheduling with computerized formative assessments. Frontiers in Education, 5, 572612. https://doi.org/10.3389/feduc.2020.572612
Cagasan, L., Care, E., Robertson, P., & Luo, R. (2020). Developing a formative assessment protocol to examine formative assessment practices in the Philippines. Educational Assessment, 25(4), 259–275. https://doi.org/10.1080/10627197.2020.1766960
Campbell, C., Murphy, J. A., & Holt, J. K. (2002). Psychometric analysis of an assessment literacy instrument: Applicability to preservice teachers. Paper presented at the annual meeting of the Mid-Western Educational Research Association, Columbus, OH.
Cham, C. Y., Chow, K. W., & Lei, C. U. (2022). Self-proctored mechanism for online high-stake assessments in university courses. International Journal of Learning Technology, 17(2), 172–187. https://doi.org/10.1504/IJLT.2022.10049981
Charman, D. (2013). Issues and impacts of using computer-based assessments (CBAs) for formative assessment. In Computer-assisted assessment in higher education (pp. 85–93). Routledge.
Choi, Y., & McClenen, C. (2020). Development of adaptive formative assessment system using computerized adaptive testing and dynamic Bayesian networks. Applied Sciences, 10(22), 8196. https://doi.org/10.3390/app10228196
Chrysafiadi, K., Troussas, C., & Virvou, M. (2022). Personalised instructional feedback in a mobile-assisted language learning application using fuzzy reasoning. International Journal of Learning Technology, 17(1), 53–76. https://doi.org/10.1504/IJLT.2022.10048511
Cizek, G. J., Andrade, H. L., & Bennett, R. E. (2019). Formative assessment: History, definition, and progress. In Handbook of formative assessment in the disciplines (pp. 3–19). Routledge.
Coombe, C., Vafadar, H., & Mohebbi, H. (2020). Language assessment literacy: What do we need to learn, unlearn, and relearn? Language Testing in Asia, 10(3), 2–16. https://doi.org/10.1186/s40468-020-00101-6
Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers’ knowledge, beliefs, and practices. Assessing Writing, 28, 43–56. https://doi.org/10.1016/j.asw.2016.03.001
Danielson, C. (2013). The framework for teaching evaluation instrument. The Danielson Group.
De Simone, J. J. (2020). The roles of collaborative professional development, self-efficacy, and positive affect in encouraging educator data use to aid student learning. Teacher Development, 24(4), 443–465. https://doi.org/10.1080/13664530.2020.1780302
Elwood, J. (2006). Formative assessment: Possibilities, boundaries and limitations. Assessment in Education: Principles Policy & Practice, 13(2), 215–232. https://doi.org/10.1080/09695940600708653
Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
Fukuda, S. T., Lander, B. W., & Pope, C. J. (2022). Formative assessment for learning how to learn: Exploring university student learning experiences. RELC Journal, 53(1), 118–133. https://doi.org/10.1177/0033688220925927
Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132. https://doi.org/10.1080/15434303.2011.642041
Furtak, E. M., Kiemer, K., Circi, R. K., Swanson, R., de León, V., Morrison, D., & Heredia, S. C. (2016). Teachers’ formative assessment abilities and their relationship to student learning: Findings from a four-year intervention study. Instructional Science, 44(3), 267–291. https://doi.org/10.1007/s11251-016-9371-3
Ghazizadeh, F., & Motallebzadeh, K. (2017). The impact of diagnostic formative assessment on listening comprehension ability and self-regulation. International Journal of Language Testing, 7(2), 178–194.
Gierl, M., Bulut, O., & Zhang, X. (2018). Using computerized formative testing to support personalized learning in higher education: An application of two assessment technologies. In Digital technologies and instructional design for personalized learning (pp. 99–119). IGI Global. https://doi.org/10.4018/978-1-5225-3940-7.ch005
Gore, J. M., Miller, A., Fray, L., Harris, J., & Prieto, E. (2021). Improving student achievement through professional development: Results from a randomised controlled trial of Quality Teaching Rounds. Teaching and Teacher Education, 101, 103297. https://doi.org/10.1016/j.tate.2021.103297
Gotwals, A. W., & Cisterna, D. (2022). Formative assessment practice progressions for teacher preparation: A framework and illustrative case. Teaching and Teacher Education, 110, 103601. https://doi.org/10.1016/j.tate.2021.103601
Graham, S., Hebert, M., & Harris, K. R. (2015). Formative assessment and writing: A meta-analysis. The Elementary School Journal, 115(4), 523–547. https://doi.org/10.1086/681947
Granić, A. (2008). Intelligent interfaces for technology-enhanced learning. In Advances in Human-Computer Interaction (pp. 143–160). I-Tech Education and Publishing KG.
Holsti, O. R. (1969). Content analysis for the social sciences and humanities. Addison-Wesley.
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Husain, F. N. (2021). Digital assessment literacy: The need of online assessment literacy and online assessment literate educators. International Education Studies, 14(10). https://doi.org/10.5539/ies.v14n10p65
Karakaya, S. P. Y., & Alparslan, Z. N. (2022). Sample size in reliability studies: A practical guide based on Cronbach’s Alpha. Psychiatry and Behavioral Sciences, 12(3), 150. https://doi.org/10.5455/PBS.20220127074618
Leenknecht, M., Wijnia, L., Köhlen, M., Fryer, L., Rikers, R., & Loyens, S. (2021). Formative assessment as practice: The role of students’ motivation. Assessment & Evaluation in Higher Education, 46(2), 236–255. https://doi.org/10.1080/02602938.2020.1765228
Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2018). Reconceptualizing the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, 25(5), 442–467. https://doi.org/10.1080/0969594X.2016.1268090
Lu, X. (2022). Second language Chinese computer-based writing by learners with alphabetic first languages: Writing behaviors, second language proficiency, genre, and text quality. Language Learning, 72(1), 45–86. https://doi.org/10.1111/lang.12469
Mandinach, E. B., & Gummer, E. S. (2016). Data literacy for teachers: Making it count in teacher preparation and practice. Teachers College Press.
Marzano, R. J. (2013). The Marzano teacher evaluation model. Marzano Research Laboratory.
Mayo, S. T. (1967). Pre-service preparation of teachers in educational measurement. US Department of Health, Education and Welfare.
McNeil, L. (2018). Understanding and addressing the challenges of learning computer-mediated dynamic assessment: A teacher education study. Language Teaching Research, 22(3), 289–309. https://doi.org/10.1177/1362168816668675
Mertler, C. A., & Campbell, C. (2005). Measuring teachers’ knowledge and application of classroom assessment concepts: Development of the Assessment Literacy Inventory. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
Nassaji, H. (2020). Good Qualitative Research. Language Teaching Research, 24(4), 427–431. https://doi.org/10.1177/1362168820941288
Oo, C. Z., Alonzo, D., & Davison, C. (2021). Pre-service teachers’ decision-making and classroom assessment practices. Frontiers in Education, 6, 628100. https://doi.org/10.3389/feduc.2021.628100
Pallant, J. (2020). SPSS survival manual: A step by step guide to data analysis using IBM SPSS. McGraw-Hill Education (UK). https://doi.org/10.4324/9781003117452
Pastore, S., & Andrade, H. L. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128–138. https://doi.org/10.1016/j.tate.2019.05.003
Patra, I., Alazemi, A., Al-Jamal, D., & Gheisari, A. (2022). The effectiveness of teachers’ written and verbal corrective feedback (CF) during formative assessment (FA) on male language learners’ academic anxiety (AA), academic performance (AP), and attitude toward learning (ATL). Language Testing in Asia, 12(1), 1–21. https://doi.org/10.1186/s40468-022-00169-2
Perry, K., Meissel, K., & Hill, M. F. (2022). Rebooting assessment: Exploring the challenges and benefits of shifting from pen-and-paper to computer in summative assessment. Educational Research Review, 100451. https://doi.org/10.1016/j.edurev.2022.100451
Plake, B. S., Impara, J. C., & Fager, J. J. (1993). Assessment competencies of teachers: A national survey. Educational Measurement: Issues and Practice, 12(4), 10–12. https://doi.org/10.1111/j.1745-3992.1993.tb00548.x
Popham, W. J. (2006). Needed: A dose of assessment literacy. Educational Leadership, 63(6), 84–85.
Popham, W. J. (2011). Assessment literacy overlooked: A teacher educator’s confession. The Teacher Educator, 46(4), 265–273. https://doi.org/10.1080/08878730.2011.605048
Rauf, M., & McCallum, L. (2020). Language assessment literacy: Task analysis in Saudi universities. In L. McCallum & C. Coombe (Eds.), The assessment of L2 written English across the MENA region (pp. 13–41). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-53254-3_2
Remesal, A. (2007). Educational reform and primary and secondary teachers’ conceptions of assessment: The Spanish instance, building upon Black and Wiliam. The Curriculum Journal, 18(1), 27–38. https://doi.org/10.1080/09585170701292133
Schildkamp, K., van der Kleij, F. M., Heitink, M. C., Kippers, W. B., & Veldkamp, B. P. (2020). Formative assessment: A systematic review of critical teacher prerequisites for classroom practice. International Journal of Educational Research, 103, 101602. https://doi.org/10.1016/j.ijer.2020.101602
Schneider, C., & Bodensohn, R. (2017). Student teachers’ appraisal of the importance of assessment in teacher education and self-reports on the development of assessment competence. Assessment in Education: Principles, Policy & Practice, 24(2), 127–146. https://doi.org/10.1080/0969594X.2017.1293002
See, B. H., Gorard, S., Lu, B., Dong, L., & Siddiqui, N. (2021). Is technology always helpful? A critical review of the impact on learning outcomes of education technology in supporting formative assessment in schools. Research Papers in Education, 1–33. https://doi.org/10.1080/02671522.2021.1907778
Shin, J., Chen, F., Lu, C., & Bulut, O. (2022). Analyzing students’ performance in computerized formative assessments to optimize teachers’ test administration decisions using deep learning frameworks. Journal of Computers in Education, 9(1), 71–91. https://doi.org/10.1007/s40692-021-00196-7
Shirley, M. L., & Irving, K. E. (2015). Connected classroom technology facilitates multiple components of formative assessment practice. Journal of Science Education and Technology, 24(1), 56–68. https://doi.org/10.1007/s10956-014-9520-x
Soh, K. C., & Zhang, L. (2017). The development and validation of a teacher assessment literacy scale: A trail report. Journal of Linguistics and Language Teaching, 8(1), 91–116.
Stiggins, R. (1991). Relevant classroom assessment training for teachers. Educational Measurement: Issues and Practice, 10(1), 7–12.
Stiggins, R. J. (2017). The perfect assessment system. ASCD.
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics. Pearson.
Tajeddin, Z., Khatib, M., & Mahdavi, M. (2022). Critical language assessment literacy of EFL teachers: Scale construction and validation. Language Testing, 39(4), 649–678. https://doi.org/10.1177/02655322211057040
Teng, L. S. (2022). Explicit strategy-based instruction in L2 writing contexts: A perspective of self-regulated learning and formative assessment. Assessing Writing, 53, 100645. https://doi.org/10.1016/j.asw.2022.100645
Tomasine, J. S. (2022). Documenting oral feedback discourse during formal formative reading assessment with an emergent bilingual student. Classroom Discourse, 1–18. https://doi.org/10.1080/19463014.2022.2090976
Torrance, H. (2012). Formative assessment at the crossroads: Conformative, deformative and transformative assessment. Oxford Review of Education, 38(3), 323–342. https://doi.org/10.1080/03054985.2012.689693
Tsagari, D., & Vogt, K. (2017). Assessment literacy of foreign language teachers around Europe: Research, challenges and future prospects. Papers in Language Testing and Assessment, 6(1), 18–40.
Wayman, J. C., Jimerson, J. B., & Cho, V. (2012). Organizational considerations in establishing the data-informed district. School Effectiveness and School Improvement, 23(2), 159–178. https://doi.org/10.1080/09243453.2011.652124
Webb, M., Gibson, D., & Forkosh-Baruch, A. (2013). Challenges for information technology supporting educational assessment. Journal of Computer Assisted Learning, 29(5), 451–462. https://doi.org/10.1111/jcal.12033
Will, K. K., McConnell, S. R., Elmquist, M., Lease, E. M., & Wackerle-Hollman, A. (2019). Meeting in the middle: Future directions for researchers to support educators’ assessment literacy and data-based decision making. Frontiers in Education, 4, 106. https://doi.org/10.3389/feduc.2019.00106
Wylie, E. C. (2020). Observing formative assessment practice: Learning lessons through validation. Educational Assessment, 25(4), 251–258. https://doi.org/10.1080/10627197.2020.1766955
Wylie, E. C., & Lyon, C. J. (2015). The fidelity of formative assessment implementation: Issues of breadth and quality. Assessment in Education: Principles, Policy & Practice, 22(1), 140–160. https://doi.org/10.1080/0969594X.2014.990416
Xuan, Q., Cheung, A., & Sun, D. (2022). The effectiveness of formative assessment for enhancing reading achievement in K-12 classrooms: A meta-analysis. Frontiers in Psychology, 13, 990196. https://doi.org/10.3389/fpsyg.2022.990196
Yan, Z., & Brown, G. T. L. (2021). Assessment for learning in the Hong Kong assessment reform: A case of policy borrowing. Studies in Educational Evaluation, 68, 100985. https://doi.org/10.1016/j.stueduc.2021.100985
Yan, Z., & Cheng, E. C. K. (2015). Primary teachers’ attitudes, intentions and practices regarding formative assessment. Teaching and Teacher Education, 45, 128–136. https://doi.org/10.1016/j.tate.2014.10.002
Yan, Z., & Chiu, M. M. (2022). The relationship between formative assessment and reading achievement: A multilevel analysis of students in 19 countries/regions. British Educational Research Journal. https://doi.org/10.1002/berj.3837
Yan, Z., & Pastore, S. (2022). Are teachers literate in formative assessment? The development and validation of the Teacher Formative Assessment Literacy Scale. Studies in Educational Evaluation, 74, 101183. https://doi.org/10.1016/j.stueduc.2022.101183
Yildirim-Erbasli, S. N., & Bulut, O. (2022). Designing predictive models for early prediction of students’ test-taking engagement in computerized formative assessments. Journal of Applied Testing Technology, 22(2). Retrieved from http://jattjournal.net/index.php/atp/article/view/167548.
Zhang, Z., & Burry-Stock, J. (1997). Assessment practices inventory: A multivariate analysis of teachers’ perceived assessment competency. [Paper presentation]. Annual meeting of the National Council on Measurement in Education, Chicago, IL.
Acknowledgements
The authors would like to thank all the editors who reviewed and edited the paper.
Funding
This study received no funding.
Author information
Contributions
Golnoush Haddadian conceptualized the study, reviewed the relevant literature, engaged with the participants, and oversaw the data collection and the results and discussion. She was the major contributor in drafting most sections of the manuscript. Sadaf Radmanesh contributed to the literature review and reviewed the sections of the manuscript related to the statistical analyses. Nooshin Haddadian conducted and reviewed the statistical analyses. All authors contributed to the results and discussion and read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1. Semi-structured interview questions
1) To what extent are you familiar with CFA-related theories? Please elaborate on the main educational aims of CFA.
2) To what extent are you familiar with designing and administering tests in the CFA domain?
3) How effective do you think CFA is in terms of its contribution to learning outcomes and your instructional practice?
4) What issues do you consider when designing and administering CFA?
5) What learner-related factors do you consider when designing tests for CFA purposes and providing learners with feedback in CFA?
6) How do you see your role as an assessor in the domain of CFA?
7) What other issues and factors are personally of relevance to you in the domain of CFA?
Appendix 2. CFAL questionnaire items and components
Practical component
1) I use different assessment tools to provide learners with different types of feedback.
2) I instruct learners how to use the tests and assessment systems used in CFA to achieve course objectives.
3) I help learners in using computer-generated feedback information to promote their learning.
4) I help learners in identifying their language learning needs based on the computer-generated feedback information.
5) I encourage learners to draw on computer-generated feedback for prospective learning tasks.
6) I instruct learners how to identify the assessment criteria in CFA.
Theoretical component
1) I know the theories underlying CFA for language teaching and learning purposes.
2) I understand that CFA can provide diverse data types which can be used to enhance language learning outcomes.
3) I can explain how CFA contributes to the identification of learners’ learning needs.
4) I understand how CFA assists me in delivering instructional practices in alignment with course objectives.
5) I know how CFA helps me deliver personalized instruction to foster learning for each individual learner.
6) I am familiar with different assessment tools and methods of CFA for different language skills and components.
7) I understand how CFA can assist learners in regulating their learning based on the feedback they receive.
Socio-affective component
1) I am aware of the anxiety some learners can experience during CFA.
2) I know that learners’ language learning motivation can be affected via CFA experiences.
3) I am conscious of how learners’ attitudes, beliefs, and values influence their CFA experiences.
4) I attend to the role of learners’ agency, voices, and preferences when designing and administering CFA tasks.
5) I acknowledge individual learners’ emotions to deliver personalized CFA.
Critical component
1) I am aware of the future consequences CFA can have on learners’ learning.
2) I recognize the impacts of the use of CFA on developing a CFA assessment culture in different language institutes in my country.
3) I attend to fairness when choosing or designing different computerized assessment tools or methods of assessment.
4) I know how the national assessment policy in my country exerts impacts on the implementation of CFA.
5) I am conscious of how state educational ideologies can impact the use of CFA for language learning and teaching purposes.
Identity-related component
1) I understand how my role as an assessor in CFA can impact my use of CFA.
2) I am aware of my own agency and voice as an assessor in designing or choosing CFA tools and methods.
3) I perceive that my own attitudes and beliefs as an assessor and teacher can influence my implementation of CFA.
4) I know how my knowledge of CFA can promote or hinder the effective use of CFA methods by my colleagues.
Developmental component
1) I am aware that developing my theoretical knowledge of CFA can enhance my implementation of CFA.
2) I acknowledge the significance of developing my computer literacy in promoting my professionalism in regard to CFA.
3) I know that participating in discussion-based workshops can help me develop my current perspectives towards CFA.
4) I think that attending practical workshops can hone my skills in designing and administering CFA tests.
Glossary
- Assessment Literacy (AL): An individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions.
- Computerized Formative Assessment (CFA): Assessment methods deploying computers to enhance the management and implementation of instructional assessment.
- Computerized Formative Assessment Literacy (CFAL): The tripartite of teachers’ knowledge, skills, and principles of formative assessment needed to deliver instructional assessment via computers. In this study, CFAL was conceptualized as a construct encompassing six factors: practical, theoretical, socio-affective, critical, identity-related, and developmental.
- English as a Foreign Language (EFL): The status of English being taught and/or learned in a country where English is not an official or native language.
- English as a Second Language (ESL): The status of English being taught and/or learned in a country where English is an official or native language.
- Exploratory Factor Analysis (EFA): A statistical procedure used to uncover the latent factors underlying a set of observed variables, such as questionnaire items (see the illustrative sketch following this glossary).
- Formative Assessment (FA): A set of activities undertaken by teachers and learners to gather information to be used as feedback for modifying teachers’ instructional practices and adjusting learners’ activities, thereby helping to determine learning gaps, offering potential for scaffolded learning, and guiding future instructional practices.
- Formative Assessment Literacy (FAL): The teachers’ repertoire of knowledge, skills, and principles needed to adapt their assessment-based instructional practices to learners’ needs and provide attuned feedback.
- Second Language Acquisition (SLA): The process of acquiring a language other than one’s first language.
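For readers who wish to see the EFA and reliability steps in concrete form, the following minimal Python sketch illustrates a typical workflow: factorability checks, extraction of a six-factor solution with rotation, and a Cronbach’s alpha computation. It uses simulated placeholder data (random Likert-type responses for 489 respondents and the 31 items listed in Appendix 2) and the open-source factor_analyzer package; neither the data nor the package reflects the authors’ actual dataset or software.

```python
# Illustrative EFA + reliability sketch on simulated (not real) questionnaire data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(42)
# 489 respondents x 31 items, mirroring the questionnaire size in Appendix 2.
data = pd.DataFrame(
    rng.integers(1, 6, size=(489, 31)),  # Likert responses from 1 to 5
    columns=[f"item_{i + 1}" for i in range(31)],
)

# Step 1: factorability checks (Bartlett's test of sphericity and KMO).
chi_square, p_value = calculate_bartlett_sphericity(data)
kmo_per_item, kmo_total = calculate_kmo(data)
print(f"Bartlett's test: chi2 = {chi_square:.1f}, p = {p_value:.3f}")
print(f"KMO measure of sampling adequacy: {kmo_total:.2f}")

# Step 2: extract six factors with varimax rotation and inspect the loadings.
fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns)
print(loadings.round(2))

# Step 3: Cronbach's alpha for the full scale, computed from its definition:
# alpha = k / (k - 1) * (1 - sum of item variances / variance of total score).
k = data.shape[1]
alpha = (k / (k - 1)) * (1 - data.var(ddof=1).sum() / data.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha (full scale): {alpha:.2f}")
```

With real responses, per-component alphas would be computed by slicing out the item columns belonging to each of the six factors; on the random placeholder data above, alpha will be near zero by construction, since the simulated items are uncorrelated.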
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Haddadian, G., Radmanesh, S. & Haddadian, N. Construction and validation of a Computerized Formative Assessment Literacy (CFAL) questionnaire for language teachers: an exploratory sequential mixed-methods investigation. Lang Test Asia 14, 33 (2024). https://doi.org/10.1186/s40468-024-00303-2