- Open Access
Language assessment literacy: an uncharted area for the English language teachers in Bangladesh
Language Testing in Asia volume 9, Article number: 1 (2019)
Language assessment literacy (LAL) is a critical field for researchers, scholars, or anyone interested in improving the language teaching environment. Understanding the basics of testing and the ability to perform testing-related activities become more significant in test-oriented countries. As such, in the extremely exam-oriented milieu of Bangladesh, giving tests and preparing students for high-stakes tests are the two core tasks performed by language teachers. English teachers’ readiness and ability to perform various test-related tasks determine the quality of English education in the country. In this regard, earlier studies have investigated various factors related to English language teaching. However, the assessment literacy of teachers has rarely been investigated within the context of Bangladeshi language teaching. There is no publication or broader research on how LAL operates among English teachers in the country. Considering the test-oriented nature of Bangladesh, it is essential to explore whether the LAL of language teachers is benefitting classroom teaching and learning. Hence, this research aims to examine the nature and functionality of LAL among English teachers in Bangladesh. The study focused on two central concerns: first, whether the English teachers in the country are academically and professionally ready to perform various testing tasks; and second, how the teachers perceive LAL in their teaching practices. Semi-structured interviews were used as the data collection method for this qualitative study. The results provided insights into how the inadequate academic and professional testing background of teachers hindered their performance in conducting assessment-related tasks and contributed to their limitations in using assessments to improve teaching. Based on the findings, the article concludes with suggestions that can be implemented to develop the language assessment awareness of English teachers in Bangladesh.
Popham (2004) once labeled teachers’ lack of appropriate training in assessment as “professional suicide” (p. 82). Recognizing the importance of assessment literacy, language testing researchers and other key stakeholders have been continuously promulgating the idea of language assessment literacy (LAL) in many parts of the world (Inbar-Lourie, 2008, 2013a; Malone, 2008). In Western educational settings, the assessment literacy of teachers has received attention in educational policies and research since the early 1990s (Gotch & French, 2014; Plake et al., 1993; Popham, 2013; Stiggins, 2004). In the last decade, some studies have investigated various aspects of LAL, for example, how it is defined or conceptualized and how it can be developed further (Lam, 2015). However, in the context of Bangladesh, assessment literacy is still an underexplored area, especially for classroom English language teachers. Although English teachers are responsible for preparing the questions for internal examinations held at their schools and for preparing students for public exams, it has never been considered essential for classroom English teachers to develop the required assessment literacy. Moreover, English is taught as a compulsory subject at the school level in Bangladesh so that students become proficient users of English in real life (National Curriculum, 2012). In a different context, López and Bernal (2009) pointed out that teachers with assessment training used tests to improve teaching and learning, whereas teachers without assessment training used tests to obtain grades. The inadequacy of assessment knowledge may “cripple the quality of education” (Popham, 2009, p. 43). What, then, is the situation for English teachers in Bangladesh? No publication has investigated or even indicated the assessment literacy of English teachers in the context of Bangladesh.
This paper, therefore, investigates to what extent English language teachers at the secondary level possess assessment literacy and how they perceive LAL.
Brief review of LAL
Generally, the knowledge, principles, and skills of language testing are known as LAL (Davies, 2008; Fulcher, 2012; Malone, 2008). Assessment literacy has largely been defined as “teachers’ understanding of assessment processes as well as their capacities to design assessment tasks, develop adequate criteria for making valid judgments on the quality of students’ performances, and understand and act upon the information that is collected through assessment” (Hay & Penney, 2013, pp. 69–70). However, LAL is distinguished from its more general form, assessment literacy, by its focus on the construct of language (Giraldo, 2018). Hence, LAL incorporates knowledge of language along with the principles and skills of language testing (Davies, 2008; Fulcher, 2012; Inbar-Lourie, 2013b). For this article, LAL is defined as language teachers’ familiarity with the basic ideas of testing, their ability to apply those ideas in classroom instruction, and their capacity to perform language assessment-related tasks (Inbar-Lourie, 2008; Malone, 2013; Taylor, 2009).
LAL is essential for language teachers as well as other stakeholders in understanding the scope of this field (Taylor, 2009). However, Scarino (2013) argues that language teachers are the most important of all the stakeholders because they are the direct test users. Unfortunately, even some experienced teachers do not possess adequate assessment knowledge (Crusan et al., 2016). A study by Tsagari and Vogt (2017) found that the sampled teachers were not prepared to conduct assessment-related tasks because they had not received enough academic support from their teacher education programs. As a result, they embraced the assessment practices of their mentors or colleagues. In this regard, Tsagari and Vogt noted that practices such as “test as you were tested” or “learning on the job” restrain teacher development and create the possibility of not implementing “published knowledge” (p. 54). This understanding is partially supported by Malone’s (2013) study, conducted in the context of foreign language instruction in the USA, which found that language instructors were keen on developing the ability to use assessment tools, unlike language testers, who focused on an accurate understanding of the theoretical aspects of assessment. In a different context, Jeong’s (2013) study concluded that instructors with a non-testing background place less emphasis on test theory than instructors with a testing background. A necessary implication of Jeong’s research is that the outcomes of such courses would differ depending on the testing or non-testing background of the instructors. These studies point to the importance of assessment-related training for language instructors.
Training may be helpful in developing the assessment literacy of language teachers. To equip teachers to be assessment literate in their classroom instruction, appropriate teacher training on assessment is required (Jeong, 2013). All pre-service and in-service English language training should create LAL opportunities, which would raise the standard of English language teaching by empowering teachers with the required assessment knowledge (Herrera & Macías, 2015). In a recent article, Giraldo (2018) stated that language teachers need to be able to deliver high-quality assessments for the development of students’ language proficiency, which is only possible if they possess the knowledge, skills, and practices of language testing. This sentiment is echoed in a study by Koh et al. (2018) on Chinese language teachers in Singapore, which showed that the participating teachers were unaware of the learning goals associated with test items before taking part in a professional development program. The researchers in that study believed that the “capacity to identify and recognize higher-order learning goals is expected to yield a significant improvement in the quality of the assessment tasks designed” by the teachers (p. 274). In a similar vein, a recent study conducted in an Iranian context suggested that raising teachers’ awareness of assessment literacy would enable them to evaluate the performance of learners (Esfandiari & Nouri, 2016). The same study reported that teachers’ teaching methods, their ways of assessing students, and their objectives varied widely according to the amount of training they had received in various forms. The way in which teachers approach and value assessments also depends largely on their assessment identities (Looney et al., 2017). Looney et al. (2017) claimed that teachers’ ability to put assessment knowledge into practice depends on their prior experiences, beliefs, and feelings about assessment.
Acknowledging these factors, DeLuca, LaPointe-McEwan, and Luhanga (2016) advocated for differentiated and targeted professional training to develop the assessment literacies of teachers. DeLuca et al. (2016) and Looney et al. (2017) recommended tailor-made training programs to cater to the individual needs of teachers in performing assessment tasks.
These perspectives raise the question of whether teachers have enough opportunity and scope to use the assessment knowledge gained from training courses in real classrooms. The pressure of examinations might impede opportunities to use assessment knowledge in practice (Koh et al., 2018). However, in the context of high-stakes exams, assessment literacy plays a crucial role in equipping teachers with the ability to evaluate standardized tests critically, so that they avoid accepting these tests without questioning their quality, as posited by Vogt and Tsagari (2014).
This brief literature review highlights two points: first, adequate knowledge of language assessment produces better language instruction; second, training plays a significant role in equipping teachers with the necessary language assessment skills.
Background of the context
English language teaching (ELT) has always been an area of challenge in Bangladesh, regardless of various attempts at curriculum reform (Rahman & Pandian, 2018). Communicative language teaching (CLT) replaced the grammar translation method (GTM) in the 1990s for teaching English in Bangladesh, with the goal of improving English teaching and learning. The first CLT-based examination at the secondary level took place in 2001. Even though a CLT-based curriculum has been in operation in the country for the last two decades, the communicative purpose of English teaching and learning has apparently never been achieved. Instead, the standards of English have declined over the last few years rather than showing evidence of improvement. Some studies (Hamid, 2011; Islam, 2015; Karim, 2004; Rahman, 2015; Selim & Mahboob, 2001) have suggested a strong presence of teaching to the test in teachers’ classroom practices. These findings are consequential because Bangladesh is an extremely test-oriented country, in which teachers prepare students to take several high-stakes tests during their school years. Classroom English teachers are responsible for designing various internal examinations, and a few of them even serve on examination boards as question setters. Consequently, it is expected that teachers’ classroom instruction would be largely molded by their assessment practices. However, no study indicates whether the academic knowledge or training of English teachers is sufficient to enable them to perform assessment-related activities; further, no study shows how they perceive assessment literacy in their practices.
Research objective, setting, and questions
The central aim of the study was to understand and gauge the assessment literacy of English teachers in Bangladesh. First, a concise literature review on assessment literacy was conducted to identify current developments in the field. Google Scholar, ERIC, and journals on educational assessment and language testing were searched using phrases such as “assessment literacy,” “test literacy,” and “language assessment literacy.” Since there was no prior research or publication on LAL in the context of Bangladesh, this paper attempts to outline the assessment literacy status of English teachers. Secondary education was chosen for the research context because in Bangladesh, the secondary school leaving examination is the biggest and most prominent exam that students take at the end of their secondary education (Sultana, 2018). This examination is known as the Secondary School Certificate (SSC) examination. As an insider and a researcher, I understand that teachers at this educational level are more focused on assessments as they prepare students for the most important school leaving public examination. The knowledge, understanding, and perspective on language testing of these teachers are expected to influence their classroom instruction as well as the learning of the students. Therefore, this study focused on English teachers teaching at the SSC level in Bangladesh.
The study addressed two central questions:
To what extent are English teachers academically and professionally ready to execute assessment-related tasks?
To what extent, and how, do English teachers perceive LAL in their teaching practices?
To answer these questions, the following conceptual framework was developed.
Conceptual framework and interview protocol
The design of the study was qualitative to gain a deeper understanding of the context and the participants. Giraldo’s (2018) eight dimensions of LAL for language teachers were used as guiding principles in the interview protocol design, data collection, and analysis. The eight dimensions (Giraldo, 2018, pp. 188–190) are categorized under the LAL components of knowledge, skills, and practice (Table 1).
Each of the eight dimensions has a list of descriptors specifying the expectations from language teachers. Although some descriptors for a dimension may overlap with the descriptors for other dimensions, they are helpful in identifying what a language teacher with LAL should know and do. The strengths of the dimensions lie in their applicability in both qualitative and quantitative studies.
The purpose of a qualitative interview is to obtain the interpretations and perceptions of the respondents rather than using them only as a source of fact retrieval (Warren, 2001). Thus, the dimensions described above were used in developing the interview guide for this qualitative research. Details of the descriptors helped the researcher generate ideas related to the understanding and opportunities that LAL could provide. The interview guide was broadly divided into two parts. The first section was designed to collect the participants’ demographic information, such as teaching biographies and experience. The aim of the second section was to understand the teachers’ knowledge, skills, and principles with respect to LAL in their professional practices (Table 2).
Participants and data collection procedure
Ten English teachers (see Table 3) were randomly selected from five schools to participate in this study. All participants were teachers at the secondary level in Dhaka, the capital city of Bangladesh. All participants were female, except for one (T2), and their teaching experience ranged from 4 to 15 years. All 10 teachers in the sample held postgraduate degrees and a teaching qualification (completion of a 1-year Bachelor of Education [B.Ed.] program). In addition, these teachers had attended a limited number of workshops related to teaching. All participants had experience in preparing internal examinations, and one teacher (T2) had also worked as a question setter for the examination board.
Invitation letters were sent to the English teachers of five randomly selected schools, and the first two teachers to express interest at each school were recruited as participants. Once the participants agreed, a combined information letter and consent form was provided to each of them to review and sign. The interview time and location were scheduled at their convenience. The purpose of the interviews was to gain an in-depth understanding of the assessment literacy of these English language teachers. Each interview lasted approximately 45 min, and in some cases, follow-up interviews were also conducted to obtain additional clarifying information. The interviews were digitally recorded.
Data analysis procedures
All interviews were transcribed and read while listening to the audio recordings. This exercise was particularly helpful in identifying elements of the recordings, such as a sarcastic tone, that were not apparent from the written transcript. The data analysis method was inductive thematic analysis (Braun & Clarke, 2006), so that the themes identified were strongly rooted in the data. First, each interview transcript was reviewed to generate initial codes. Second, similar codes were clustered together to create common themes. Third, the codes and themes were reviewed in light of the research questions. Finally, after a thorough analysis of the transcripts, final themes corresponding to the research questions were established.
Results and discussion
Lack of academic knowledge and professional training
Analysis of the teachers’ demographic information showed that eight of the ten participants held degrees in English literature, one held BA and MA degrees in Economics, and just one teacher held an MA degree in English Language Teaching (ELT). Those who held degrees in English literature did not understand the approaches to English language teaching. They lacked any academic qualification or knowledge with respect to language teaching, let alone language testing. Except for the one teacher who had served on an examination board as a question setter, the teachers had no formal in-service training on language testing or test design.
There is one module dedicated to teaching the assessment of students’ learning in the B.Ed. English syllabus (Syllabus for Bachelor of Education (B.Ed.), 2017, pp. 45–46). However, the teachers in the sample stated that although the B.Ed. course introduced the basic concepts of testing (at the definition level), it did not cover them in any detail. They recalled that the English courses of the B.Ed. program predominantly focused on how to teach and placed little importance on assessment. The short in-service courses and workshops that these participants subsequently attended primarily focused on teaching methodology, lesson planning, and classroom management. The teacher who had experience working as a question setter for public examinations at the secondary level had attended a 1-day workshop on question setting. That teacher reported that 2018 was the first time the exam board had arranged a workshop for question setters. The teacher acknowledged that this basic training had helped her in setting the internal examination questions for the school where she worked.
The sampled language teachers thus lacked adequate academic and professional training on language assessment, yet they were expected to deliver language tests at their workplaces from the start of their careers. Since they never received any in-service training focused on language tests, their ability to understand the underlying issues of assessment is questionable. How can teachers who are not assessment literate be expected to produce language tests that genuinely assess or measure the abilities of students?
Expertise in designing language tests
The teachers developed their expertise as language testing practitioners through suggestions from colleagues and experiential learning. Since they had no prior training on designing language tests, all teachers in the sample used previous internal and board questions as models and relied on professional suggestions from their senior colleagues. In this regard, one participant explained:
When I had to prepare the question papers for the internal examinations in the beginning days of my teaching career, I looked at the question papers of previous board examination to know the format. I kind of replicated the items using different texts, so it did not look the same. Otherwise, I asked my senior colleagues to give me their question papers to get some ideas. My senior colleagues did the same as well [T3].
In further discussions on this matter, this participant revealed that, because she did not know how to design a question in the early days of her teaching career, she used to copy the patterns of questions from the sample questions. For example, in creating a gap-filling or matching question, the teacher simply changed the text of the original test items without understanding why the question was structured the way it was. Sometimes, she just changed the names or a few minor details of the original question paper to develop a question paper for her students. In other instances, the teachers had to rely on their instincts about how to write and grade test items. The more complex problem was creating classroom tests, about which one participant stated: “I did not know the basics of question design, so I followed my instincts” [T6]. This participant added that she experienced a sense of uncertainty and a feeling of “not doing it right.” Owing to the lack of education on language testing, teachers relied on trial and error: if something did not work, they simply did not repeat it. All respondents agreed that they developed a self-perceived LAL while teaching and performing various practical assessment tasks.
The following example illustrates what happens when there is no real academic and professional training on assessment. During the interviews, it emerged that most of the teachers did not know the difference between “cloze tests” and “regular gap-filling exercises.” Even though classrooms were not observed for this study, it could be deduced that teachers might not be administering cloze tests in the classroom correctly. In discussing their experience performing various assessment-related activities, it was found that none of the sampled teachers, except one (T2), knew the purpose behind setting multiple-choice questions (MCQs) in the SSC English examination. In contrast, the sample guideline uploaded to the website of the National Curriculum and Textbook Board (NCTB) of Bangladesh clearly states the purpose behind each of the items. In general, all nine of these participants stated that MCQs were designed to test students’ ability to discover the correct answer. Further discussion on how they taught MCQs in classrooms revealed that their teaching methods were solely practice-oriented. None of the teachers, including T2, taught the basic skills of scanning, skimming, reading for gist, and inference when teaching MCQs for the SSC examination. They had earlier revealed that they learned question setting from the sample questions, but it appears that the teachers failed to grasp the essential concepts behind giving tests or creating examination items. Thus, classroom teaching might suffer from the teachers’ lack of LAL.
Perceptions and use of tests
The teachers claimed that they used various forms of tests to evaluate the performance of students. However, all those assessment tasks were somewhat traditional written tests. They used tasks such as gap filling, cloze tests, MCQs, sentence completion, and matching, as well as occasional writing activities such as composing paragraphs, essays, or letters. The participant teachers did not consider applying other forms of assessment in their teaching, such as portfolios or peer assessment. In this regard, all the teacher participants at the five selected schools only used approaches that were aligned with the public examinations and designed questions based on the sample public examination questions. The class tests (short quizzes) were aimed at the internal examinations, and the internal examinations were aimed at the external public examinations. The following interview excerpt helps to highlight the LAL of the participant teachers:
Interviewer: Do you use any other forms of assessments to gauge the proficiency of the learners?
Teacher: I give them quiz and class tests.
Interviewer: Did you try any alternative assessments?
Teacher: They do not need them [T10].
When the participant stated that students did not need any other forms of assessment, she meant that alternative forms of assessment did not directly contribute to students’ preparation for the public English examination. Teachers were not aware of the need to use alternative assessments to evaluate the performance of students. This impression arose because classroom teaching was exam-oriented. For example, the instructors rarely reviewed or assessed speaking or listening skills when preparing the students for the examination because these two skills were never tested. Along these lines, alternative forms of assessment did not fit the criteria of the exercises on the English public examination. Teachers, therefore, did not feel the need to go beyond the traditional route of giving tests. This situation leads to the question of how teachers perceive the purpose of assessment.
From the interviews, it was apparent that the teachers perceived the giving of tests as a way to assess the readiness of the students for public examinations. Comments from the teachers, for example, “tests are given to the students to rank them in the class” [T9], “the core purpose for giving the tests is not to improve teaching but to grade them” [T1], and “exams give us a sense about students’ preparation for the board exam and at the same time push them to study hard” [T4], illuminate the fact that this set of English teachers seemed not to perceive the purpose of language assessment as an evaluation of the language proficiency of students. They failed to appreciate the actual goal of teaching English at the secondary level, which is, according to the English National Curriculum (2012) of Bangladesh, to equip students with the ability to use English in real life. A teaching-to-the-test mindset was reflected in the teachers’ failure to recognize the purpose of examinations.
After a long discussion on how they used tests, a few study participants revealed that sometimes examination scripts pointed out areas of weakness that might need more practice. One of the teachers explained, “When I check the scripts, I identify the weak areas of the students. I practice those weak items in the classrooms so that my students do not repeat the same mistakes in the next examinations.” [T8] For the teachers, “giving feedback” on students’ development meant identifying the test items where students needed more practice so that they could do well on their next examinations. The interviews provided insight into the fact that teachers did not follow the curriculum goals of language teaching in their classroom instruction. Their classroom teaching was more test-oriented, since they only taught with the test in mind. The effect of the examination is evident in their teaching style of “teaching to the test,” which is likely to create a negative test effect on classroom teaching.
Awareness of language assessment
The discussion about the influence of the examination on instruction led to a discussion of the influence of the external public examinations on classroom instruction. The interviews revealed that the sampled teachers had never given critical thought to the effects of public examinations. When asked how they viewed the standard external examinations, one of the teachers replied,
Does it matter? Public examination is a public examination. It does not matter what I think, my duty is to prepare the students for the examination.
When asked how they interpreted the scores from the examinations, one of the participants replied,
Once the examination is taken, we do not have to worry about the students… we are done with them…but if the overall results were poor in one of the years, then we would prepare the students more rigorously for next year.
That teacher explained that if the scores were poor in a given year, they would have the students practice more and give them extra practice tests. This conversation highlighted the fact that regular classroom teachers have not developed the ability to critique the strength of standardized tests or to use test scores to improve English classroom instruction. The SSC English examination substantially influenced their classroom instruction. They taught the items that were important for the examination and created practice tests with patterns similar to the SSC English examination. None of the English teachers expressed any concern about the quality of the SSC examination. In Bangladesh, the examination is so important that, for teachers, it is the sole reason for teaching the students. The teachers were evidently socially conditioned to accept the public examination as it was, and they had not been academically trained to break that conditioning. The actual purpose of teaching English, which is to develop the language proficiency of the students, is lost in the process of preparing the students for the external public examinations.
This lack of awareness is also reflected in the following excerpt:
Interviewer: Since you did not receive any professional training on designing language tests, do you think that getting training would be more effective?
Teacher: I think… I am okay… we have the samples, and I set the questions based on the sample or model questions [T2].
All the teachers, except for one (T6), expressed satisfaction with what they were doing within their language assessment roles. They seemed unaware of how receiving professional training on language assessment would benefit their teaching practices.
This study reports on an investigation conducted with 10 secondary school English teachers from five schools in Bangladesh. Owing to the small sample size, the results are not meant to be generalized into an overall picture of the country. However, they do suggest the extent to which language teachers at other schools, especially at low-performing schools and rural schools, might similarly lack assessment literacy. In addition, this study is expected to make a significant contribution to the field of assessment literacy by providing a research window into a context such as Bangladesh. This attempt to examine the language assessment literacy of Bangladeshi English teachers indicates a need for further extensive research in this setting.
Although this small-scale research does not clearly establish that these 10 teacher participants lacked LAL, their views, opinions, and classroom practices surely raise questions about the quality of their knowledge, skills, and practices of language assessment. The results of the study provided insights into a few serious issues. However, all the concerns raised bear some relationship to the inadequacy of teacher assessment literacy.
All sampled teachers had teaching experience ranging from 4 to 15 years, which is likely to have contributed to some extent to their understanding of language assessment. However, they neither had an academic orientation toward language testing and assessment nor received proper training on language assessment while they were teaching. Thus, their grasp of the foundations of language assessment and of the skills required to use assessment in teaching practice was loosely based on experiential learning. The development of LAL in these teachers is akin to relying on folklore in the absence of a solid theoretical foundation. Approaches to LAL such as learning from colleagues or from experience impede the process of a teacher’s development (Tsagari & Vogt, 2017). The examples provided in this study call into question the ability of these English language teachers to perform the assessment tasks they had learned on the job. They never had the opportunity to validate their perceptions and understandings about assessment through further training.
Consequently, the language teachers’ understanding of the purpose of assessment is somewhat limited. For them, the goals of assessment are closely tied to grading and test preparation, which leads directly to the phenomenon of teaching to the test, triggering negative washback (the adverse effect of testing on teaching and learning) in English language teaching in the country. Negative washback occurs when teachers allow test objectives to supersede curriculum objectives in their teaching. Owing to inadequate assessment literacy, language teachers are unable to implement the curriculum goals for teaching English. By only preparing students for tests, English teachers fail to grasp the essential connections among the curriculum, classroom instruction, and examination. An assessment-literate teacher, by contrast, can improve classroom instruction by creating opportunities for students to learn higher-order skills (Koh et al., 2018), which is unlikely in the context of the present study. The lack of assessment literacy among English teachers could therefore be one of the contributing factors behind the reported declining standards of English language teaching in Bangladesh.
In an extremely test-oriented country such as Bangladesh, testing is expected to influence teaching and learning. As part of this testing-driven education system, teachers are responsible for performing a range of assessment tasks. It is therefore vital to provide education on assessment and testing in order to advance English language teaching in the country. Unless teachers are educated to appreciate the basics and purpose of evaluation and assessment, the quality of classroom instruction will not improve. To become knowledgeable and skilled in language testing and assessment, teachers need continuous professional development opportunities.
Attending professional training in language assessment would equip teachers with an understanding of the dynamic and challenging nature of language assessment and with up-to-date assessment practices. An initial suggestion in this regard would be to include language testing as a core module in the B.Ed. teaching certification course for English, so that prospective teachers are educated in the subject matter of language testing. Opportunities should also be created for English teachers to participate in continuous professional development courses and stay current in their field. A further suggestion, although it would require some time, is to establish a language-testing body in the country to coordinate various forms of language assessment training for language teachers as well as testers. However, any professional training should take into account the backgrounds and experiences of the teachers (Fulcher, 2012); this supports Scarino's (2013) argument that LAL should be construed within a teacher’s interpretative framework, that is, the teacher’s teaching context, social perspectives, beliefs, and understandings should be acknowledged in discussions of LAL. This interpretative framework can best be understood through Looney et al.’s (2017) teacher assessment identity framework, which acknowledges the influence of background, beliefs, and feelings in constructing assessment ideas. This principle is important in the context of Bangladesh because the country has a test-oriented culture and its teachers are socially conditioned to view tests only from a grading perspective: they are trained to consider tests as gatekeepers, purposely designed to grade and rank. Discussion of assessment literacy in Bangladesh should therefore be grounded in the beliefs, values, education, backgrounds, and previous training of its teachers.
Teachers need not only the knowledge, principles, and skills of language assessment but also the understanding required to apply them in their own teaching environments, which are subject to numerous limitations.
Abbreviations
B.Ed.: Bachelor of Education
CLT: Communicative language teaching
ELT: English language teaching
GTM: Grammar translation method
LAL: Language assessment literacy
MCQ: Multiple choice questions
SSC: Secondary School Certificate
TESOL: Teaching English to Speakers of Other Languages
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: surveying second language teachers’ knowledge, beliefs, and practices. Assessing Writing, 28, 43–56. https://doi.org/10.1016/j.asw.2016.03.001.
Davies, A. (2008). Textbook trends in teaching language testing. Language Testing, 25(3), 327–347. https://doi.org/10.1177/0265532208090156.
DeLuca, C., LaPointe-McEwan, D., & Luhanga, U. (2016). Approaches to classroom assessment inventory: A new instrument to support teacher assessment literacy. Educational Assessment, 21(4), 248–266. https://doi.org/10.1080/10627197.2016.1236677.
Esfandiari, R., & Nouri, R. (2016). A mixed-methods, cross-sectional study of assessment literacy of Iranian university instructors: implications for teachers’ professional development. Iranian Journal of Applied Linguistics, 19(2), 115–154. https://doi.org/10.29252/ijal.19.2.115.
Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132. https://doi.org/10.1080/15434303.2011.642041.
Giraldo, F. (2018). Language assessment literacy: implications for language teachers. Profile: Issues in Teachers’ Professional Development, 20(1), 179–195. https://doi.org/10.15446/profile.v20n1.62089.
Gotch, C. M., & French, B. F. (2014). A systematic review of assessment literacy measures. Educational Measurement: Issues and Practice, 33, 14–18. https://doi.org/10.1111/emip.12030.
Hamid, M. O. (2011). Planning for failure: English and language policy and planning in Bangladesh. In J. A. Fishman & O. Garcia (Eds.), Handbook of language and ethnic identity-the success-failure continuum in language and ethnic identity efforts (Vol. 2, 2nd ed., pp. 192–203). New York: Oxford University Press.
Hay, P., & Penney, D. (2013). Assessment in physical education: a sociocultural perspective. London: Routledge.
Herrera, L., & Macías, D. (2015). A call for language assessment literacy in the education and development of teachers of English as a foreign language. Colombian Applied Linguistics Journal, 17(2), 302–312. https://doi.org/10.14483/udistrital.jour.calj.2015.2.a09.
Inbar-Lourie, O. (2008). Constructing a language assessment knowledge base: a focus on language assessment courses. Language Testing, 25(3), 385–402. https://doi.org/10.1177/0265532208090158.
Inbar-Lourie, O. (2013a). Guest editorial to the special issue on language assessment literacy. Language Testing, 30(3), 301–307. https://doi.org/10.1177/0265532213480126.
Inbar-Lourie, O. (2013b). Language assessment literacy. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 2923–2931). Oxford: Blackwell.
Islam, S. M. A. (2015). Language policy and practice in secondary school contexts in Bangladesh: challenges to the implementation of language-in-education policy (Unpublished doctoral dissertation). Aalborg: Aalborg University.
Jeong, H. (2013). Defining assessment literacy: is it different for language testers and non-language testers? Language Testing, 30(3), 345–362. https://doi.org/10.1177/0265532213480334.
Karim, H. B. B. A. (2004). A study of teachers’ perceptions of factors affecting curriculum change (unpublished doctoral dissertation). Penang: Universiti Sains Malaysia.
Koh, K., Burke, L. E. C.-A., Luke, A., Gong, W., & Tan, C. (2018). Developing the assessment literacy of teachers in Chinese language classrooms: a focus on assessment task design. Language Teaching Research, 22(3), 264–288. https://doi.org/10.1177/1362168816684366.
Lam, R. (2015). Language assessment training in Hong Kong: implications for language assessment literacy. Language Testing, 32(2), 169–197. https://doi.org/10.1177/0265532214554321.
Looney, A., Cumming, J., van der Kleij, F., & Harris, K. (2017). Reconceptualising the role of teachers as assessors: teacher assessment identity. Assessment in Education: Principles, Policy & Practice, 1–27. https://doi.org/10.1080/0969594X.2016.1268090.
López, A., & Bernal, R. (2009). Language testing in Colombia: a call for more teacher education and teacher training in language assessment. Profile: Issues in Teachers’ Professional Development, 11(2), 55–70.
Malone, M. (2008). Training in language assessment. In E. Shohamy & N. Hornberger (Eds.), Encyclopedia of language and education: language testing and assessment (2nd ed., Vol. 7, pp. 225–233). New York: Springer. https://doi.org/10.1007/978-0-387-30424-3_178.
Malone, M. E. (2013). The essentials of assessment literacy: contrasts between testers and users. Language Testing, 30(3), 329–344. https://doi.org/10.1177/0265532213480129.
National Curriculum & Textbook Board. (2012). National curriculum. Dhaka: NCTB.
Plake, B., Impara, J., & Fager, J. (1993). Assessment competencies of teachers: a national survey. Educational Measurement: Issues and Practice, 12(4), 10–39. https://doi.org/10.1111/j.1745-3992.1993.tb00548.x.
Popham, W. J. (2004). Why assessment illiteracy is professional suicide. Educational Leadership, 62(1), 82–83.
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48, 4–11.
Popham, W. J. (2013). Classroom assessment: what teachers need to know (7th ed.). Boston: Pearson.
Rahman, M. M., & Pandian, A. (2018). A critical investigation of English language teaching in Bangladesh. English Today, 1–7. https://doi.org/10.1017/S026607841700061X.
Rahman, M. S. (2015). Implementing CLT at higher secondary level in Bangladesh: a review of change management. Journal of Education and Practice, 6(2), 93–102.
Scarino, A. (2013). Language assessment literacy as self-awareness: understanding the role of interpretation in assessment and in teacher learning. Language Testing, 30(3), 309–327. https://doi.org/10.1177/0265532213480128.
Selim, A., & Mahboob, T. S. (2001). ELT and English language teachers of Bangladesh: a profile. In F. Alam, N. Zaman, & T. Ahmed (Eds.), Revisioning English in Bangladesh (pp. 141–151). Dhaka: The University Press Limited.
Stiggins, R. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86, 22–27. https://doi.org/10.1177/003172170408600106.
Sultana, N. (2018). Test review of the English public examination at the secondary level in Bangladesh. Language Testing in Asia, 8(16), 1–9.
Syllabus for Bachelor of Education (B.Ed.). (2017). Gazipur, Bangladesh: National University.
Taylor, L. (2009). Developing assessment literacy. Annual Review of Applied Linguistics, 29, 21–36. https://doi.org/10.1017/S0267190509090035.
Tsagari, D., & Vogt, K. (2017). Assessment literacy of foreign language teachers around Europe: research, challenges and future prospects. Papers in Language Testing and Assessment, 6(1), 41–63.
Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: findings of a European study. Language Assessment Quarterly, 11(4), 374–402. https://doi.org/10.1080/15434303.2014.960046.
Warren, C. A. B. (2001). Qualitative interviewing. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook of interview research (pp. 83–102). Thousand Oaks, CA: Sage Publications, Inc. https://doi.org/10.4135/9781412973588.
Availability of data and materials
The data that support the findings of this study are available from the corresponding author, N. Sultana, on request. The data are not publicly available because the interviews contain information that could compromise the privacy of the research participants. The letter of information and consent form assured participants that no publication would reveal their identities.
Competing interests
The author declares that she has no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.