Comparability of reading tasks in high-stakes English proficiency tests and university courses in Korea

Abstract

With the increased popularity of English-medium instruction (EMI) in higher education, many East Asian universities use international English proficiency tests to make admissions and placement decisions. Since these tests were not originally designed for EMI contexts, validity evidence is needed to support their use in this new context. To support the interpretation of test performance as representative of performance in a target language use (TLU) domain, this study investigated (1) the characteristics of English reading tasks in Korean EMI undergraduate and graduate courses, (2) the extent to which they are comparable to the characteristics of reading tasks on TOEFL iBT and IELTS, and (3) the extent to which students perceive EMI reading tasks and the test reading tasks to be comparable. Fifty-four undergraduate and graduate students in EMI content courses at a Korean university completed an online questionnaire. Analyses revealed that EMI reading tasks share several characteristics with reading tasks in USA and UK universities. Although EMI reading tasks had some key characteristics in common with TOEFL and IELTS reading tasks, the test tasks were much more limited in range. Finally, the extent to which students perceived EMI reading tasks and TOEFL/IELTS reading tasks to be comparable varied across academic areas.

Introduction

English-medium instruction (EMI) in higher education has been rapidly gaining popularity in East Asian countries. EMI is defined as “The use of the English language to teach academic subjects (other than English itself) in countries or jurisdictions where the first language of the majority of the population is not English” (Macaro et al. 2018, p. 37). Most leading universities in Korea have implemented EMI since the mid-2000s, when the government actively promoted EMI in higher education to (1) equip students with English skills, (2) attract international scholars and students, and (3) encourage professors to participate in international academia (Byun et al. 2011). These universities use English proficiency test scores, such as TOEFL iBT or IELTS scores, as part of the admissions process, considering the scores one indicator of students’ potential to succeed in their EMI courses (Note 1). Some universities use admitted students’ English proficiency test scores to decide whether a student needs to take additional English language courses (Note 2). The increase in EMI courses is likely to continue as the number of potential students drastically decreases due to the low birth rate in Korea (Park, 2021), which has pushed many universities to recruit international students in order to remain viable.

However, neither TOEFL iBT nor IELTS was originally designed for East Asian contexts. The TOEFL iBT was designed to inform admissions decisions at North American universities (Jamieson et al. 2008). The IELTS Handbook states that IELTS was “designed to assess the language ability of candidates who need to study or work where English is the language of communication” (IELTS, 2007, p. 2) and is used by universities and employers in English-speaking countries such as the UK, the USA, and Australia. Universities that offer EMI courses in countries where English is not the dominant language differ from universities in English-speaking countries. For example, in most EMI programs, most instructors and students are not native speakers of English. Also, English is rarely used outside the classroom; most other activities related to students’ academic life are conducted in the local language. Finally, language tasks in EMI courses are potentially different from those in universities in English-speaking countries due to the availability of local language resources (Kim and Tatar, 2017; Kim et al. 2014). Therefore, East Asian universities’ use of English proficiency tests designed for North American, UK, or Australian contexts requires a separate validation process to investigate whether this is a valid use of the tests.

To interpret performance on a test as representative of performance in a target language use (TLU) domain, the first step is to examine whether the test tasks and TLU tasks are comparable. One challenge is that little is known about the actual language tasks that students engage in within EMI settings. The literature on EMI has focused on students’ and instructors’ perceptions of EMI implementation (e.g., Lee and Lee, 2018; Vu and Burns, 2014) and on the effects of EMI on language and content learning (e.g., Joe and Lee, 2013; Lei and Hu, 2014), but not on the tasks that students complete using English.

The purpose of this study is to investigate the characteristics of English reading tasks in university courses in South Korea and to compare them to the characteristics of tasks in the reading sections of TOEFL iBT and IELTS. This study focuses on reading tasks specifically because university courses in non-English-speaking countries place a heavier emphasis on English reading skills than on other English skills. Since almost all international academic journals are published in English, reading academic papers written in English is not an option but a requirement in higher education (Altbach, 2004; Phan, 2013). Therefore, not only EMI courses but also many non-EMI courses in Korean universities require reading textbook chapters or journal articles written in English. First, examining the actual reading tasks that students are required to complete in their courses would contribute to the EMI literature, as little is known about these tasks. Second, comparing reading tasks in EMI settings to those in international English proficiency tests will provide evidence for the domain description inference in the test validity argument, which requires investigating whether “the observations of test performance reveal relevant knowledge, skills, and abilities in situations representative of those in the target domain” (Chapelle et al. 2008, p. 14). Although TOEFL iBT or IELTS scores are not universally required by Korean universities, the findings of this study have the potential to spark discussions and further research on how to better prepare Korean university students for EMI courses. They may also prompt consideration of adapting and improving entrance exams to more accurately assess the types of reading tasks students are likely to encounter in their courses.

Literature review

Reading in university courses

Several studies have explored language tasks in North American university courses to inform the development of tasks for TOEFL iBT (e.g., Biber et al. 2002; Hale et al. 1996; Rosenfeld et al. 2001), most of which focused on writing. Rosenfeld et al. (2001), one of the few studies that addressed reading, surveyed 370 university faculty and 345 undergraduate and graduate students across different disciplines from 21 universities in the USA and Canada, asking them to judge the importance of tasks for academic success. Both undergraduate and graduate faculty agreed that the most important reading skills were “reading text material with sufficient care and comprehension to remember major ideas” and “reading and understanding written instructions/directions concerning classroom assignments and/or examinations.” Although these two skills were ranked among the most important reading skills by students as well, both undergraduate and graduate students rated “determining the basic theme (main idea) of a passage” as the most important reading skill.

Carson (2001) examined actual course artifacts, including course texts, exams, and assignments, and interviewed instructors and students from six undergraduate and graduate courses at a USA university. She found that retrieval and application of information from texts were more important in undergraduate courses, whereas interpretation and evaluation were more important in graduate courses. The most common text organization across disciplines involved analysis and classification, and narrative and chronological presentation was found in some course texts, depending on the course objectives. The most common in-class exam question types were recognizing/retrieving/identifying and synthesizing information from texts. Hartshorn et al. (2017) surveyed 141 professors from five majors at a USA university and examined the reading purposes in upper-division courses. Results showed that the most important reading purposes were ‘understanding course content,’ ‘understanding discipline-specific information,’ and ‘synthesizing.’

Liu and Brown (2019) developed a questionnaire with 21 reading and reading-related skills and asked 221 undergraduate students at a New Zealand university to rate the need for each skill in their university studies. Factor analyses revealed five distinguishable subdomains: textbase comprehension (understanding main ideas and details), understanding pragmatic and rhetorical communication, information reconstruction and intertextual model building, expeditious reading, and global situation model building. Textbase comprehension, which included identifying the main ideas of a text, understanding the stated and implied meaning of sentences, and understanding the details in a text, was reported to be the most important subdomain.

Although the aforementioned studies have contributed to our understanding of the reading skills needed for academic success and of reading tasks at the university level, all of them explored reading in university courses in English-speaking countries. The only study to examine reading in EMI contexts, Owen et al. (2021), investigated the reading practices of university students in Nepal and Sweden. Nine students at a Swedish university and 19 students at a Nepali university completed reading logs, reporting what they read, their reading purpose, the language, and the length. In addition, 60 students from the Swedish university and 69 students from the Nepali university completed a questionnaire about their reading practices. The findings suggested that students in Sweden and Nepal employ different skills and strategies, although reading in English was a crucial part of academic success in both EMI contexts.

Comparability of reading tasks in English proficiency tests and in actual university courses

Although comparative analysis of test tasks and TLU tasks is essential to support the domain description inference in a test validity argument, few such studies have focused on reading. Two studies compared language features in IELTS reading tests with those in reading tasks in university courses (Moore et al. 2012; Weir et al. 2012). Weir et al. (2012) conducted a series of studies to investigate the relationship between academic reading at a British university and reading on the IELTS Reading Test. The researchers recruited 642 undergraduate and 119 graduate students and administered questionnaires on the genres of their course readings, the perceived importance of different reading purposes, reading strategies, and reading-related difficulties. The responses were compared to three experts’ records of reading strategy use when responding to IELTS reading tasks. They found that the IELTS tasks mostly elicited careful reading, whereas students reported that expeditious reading with skimming and scanning skills was more useful for their university course readings. Additionally, the researchers compared 42 IELTS test texts and 42 extracts from 14 undergraduate textbooks on contextual parameters including grammatical features, discourse features, reader-writer relationship, content knowledge, and cultural specificity. They concluded that the contextual parameters of the IELTS texts were within the range of those of undergraduate textbooks, although on many parameters the IELTS texts imposed fewer demands on readers.

Moore et al. (2012) compared reading tasks in IELTS practice tests and in 12 undergraduate courses at an Australian university, focusing on the level of engagement (whether a reader needs to engage with local or global levels of text) and the type of engagement (whether a reader’s engagement with texts needs to be literal or interpretive). The researchers found that readings in the two contexts were similar in that most tasks in both the IELTS reading test and the course readings required local and literal engagement. However, whereas all tasks on the IELTS reading test had a ‘local-literal’ orientation, course reading tasks showed a greater variety of engagement types.

The results of the two studies are consistent: test tasks and university tasks were generally comparable, but university reading tasks covered a wider range of text types than test tasks. However, since both studies compared IELTS reading tests to university course readings in English-speaking countries, it remains unclear whether test tasks are also comparable to English reading tasks in EMI courses at universities in countries where English is not a dominant language. Owen et al. (2021), as discussed earlier, did not directly compare the characteristics of reading tasks in university courses and reading proficiency tests, but they used a questionnaire to examine how students at a Swedish university and a Nepali university perceived the comparability of reading tasks in their EMI courses and on TOEFL iBT. The overwhelming perception among participants was that their performance was affected by reading topics and that tests should reflect their field of study. The authors concluded that although empirical studies have suggested that test topic is not a factor in reading performance, incorporating test takers’ perceptions is an important part of validating test use in a specific context.

Research questions

The underlying assumption in using TOEFL iBT or IELTS scores as part of the admissions process is that the English proficiency test tasks require skills and abilities similar to those required in the target university setting, and thus that the scores can be interpreted as indicators of applicants’ ability to use English in that setting. However, few studies have examined the characteristics of the tasks that students are expected to accomplish in EMI courses in Asian countries.

This study will address this gap by investigating the English reading tasks in Korean university courses and comparing the reading tasks in EMI courses to those in two major international English proficiency tests. Specifically, the following three research questions are addressed in this study:

  1. What are the characteristics (genre, function, purpose, and required skills) of English reading tasks that are used in undergraduate and graduate EMI courses at a Korean university?

  2. What are the characteristics of reading test tasks in TOEFL iBT and IELTS, and how are they similar to or different from those of English reading tasks in undergraduate and graduate EMI courses at a Korean university?

  3. To what extent do Korean undergraduate and graduate students perceive reading tasks in TOEFL iBT and IELTS to be comparable to the English reading tasks in their EMI courses in terms of the structure of the text, question type, and difficulty of texts and questions?

Method

Participants

Participants were 54 undergraduate and graduate students enrolled in one or more content courses (not English language courses) that required reading in English at a Korean university (Note 3). Among the 54 participants, 44 (81%) were undergraduate students and 10 (19%) were graduate students (see Table 1). Table 2 shows the distribution of participants across disciplines. For analytic purposes, disciplines were grouped into three academic areas following previous studies (e.g., Liu and Brown, 2019; Moore et al. 2012): Humanities and Arts; Social Sciences; and Natural Sciences, Engineering, and Math.

Table 1 Distribution of academic year of participants
Table 2 Distribution of participants across academic areas and disciplines

Instruments and materials

Online questionnaire

The online questionnaire was administered using Qualtrics and included questions about the characteristics of reading tasks assigned in EMI courses: genre, text function, reader purpose, and required post-reading activities. Since the reading texts and tasks assigned in different subjects may vary, students were asked to choose only one of the EMI courses they were taking and to respond to the questions about that course. Therefore, each student’s responses related to only one course (i.e., one subject area). The genre options for EMI reading tasks included the following:

  • Textbooks (a book that contains detailed information about a subject for people who are studying that subject. e.g., a chemistry textbook),

  • Non-textbook books (any book that is not specifically designed as a standard work for the study of a particular subject, e.g., fiction and non-fiction books),

  • Lecture slides (lecture notes or slides that lecturers use for each class, e.g., slides in .ppt or .pdf format),

  • Newspaper/magazine articles (publications in newspapers or magazines, e.g., articles in the New York Times),

  • Academic journal articles (papers published in academic journals. e.g., papers in Language Testing in Asia), and

  • Classmates' writings (classmates' writings that students need to read in or after class. e.g., reading and leaving comments on other students’ posts on an online class platform such as Blackboard).

The questions for text function and reader purpose were developed based on the classification of reading texts and tasks in the TOEFL iBT reading framework in Enright et al. (2000). Enright et al. (2000) conceptualized academic reading abilities as a construct that includes four reader purposes: (1) reading to find information; (2) reading for basic comprehension; (3) reading to learn; and (4) reading to integrate information across multiple texts. This reader-purpose perspective has been considered an effective taxonomy of reading practice as “the defining notions are fully interpretable as concepts associated with reading comprehension” (Enright et al. 2000, p. 4) and has been used as a framework for the TOEFL iBT reading test.

As part of the questionnaire, students read a set of reading tasks from TOEFL iBT and another set from IELTS and were asked questions for each set regarding the comparability of the reading test tasks and their course reading tasks in terms of the structure of the text, question type, and the difficulty of texts and questions. A five-point Likert scale was used for these questions. They were also asked to explain their answers. In addition, students were asked to upload the EMI course syllabus.

A draft of the questionnaire was piloted with three students and subsequently revised to clarify the language of the questions and adjust the expected time needed to complete the questionnaire. The questions on the questionnaire were provided in both English and Korean in order to minimize any misunderstanding.

Reading sections of TOEFL iBT and IELTS

Tasks from four sets of TOEFL iBT reading tests and four sets of IELTS Academic reading tests were analyzed in terms of the genre, text function, and reader purpose, based on Enright et al. (2000). The test materials were publicly available in Official TOEFL iBT Tests Vol. 1 (Third Edition) (2018) and IELTS Academic 14 (2019), official practice books published by ETS and Cambridge ESOL.

The TOEFL iBT reading test “reflects the types of reading that occur in university-level academic settings” (Enright et al. 2000, p. 14). Test takers read 3 or 4 passages and answer 30 to 40 multiple-choice questions; the reading passages are excerpts from textbooks that would be used in introductory university courses (Note 4). The IELTS Specification published in 1989 stated that the IELTS reading test intends to test skills including identifying structure, following instructions, finding main ideas, identifying the underlying theme, identifying relationships between the main ideas, identifying and comparing facts, evaluating evidence, formulating a hypothesis, reaching a conclusion, and drawing logical inferences (as cited in Alderson, 2000, p. 131). Test text sources include magazines, journals, books, and newspapers written for a non-specialist audience (IELTS, 2007). The current IELTS Academic reading test consists of 3 passages with 40 questions (Note 5).

Data analysis

To answer the first research question, about the characteristics of English reading tasks in EMI courses, the student responses collected through the online questionnaire were analyzed using descriptive statistics in terms of primary text function and reader purpose, following the classification of reading texts and tasks in Enright et al. (2000). This classification was appropriate for this study because it proposed types of text and tasks in academic reading at the university level. In addition, the genre of the reading tasks and required activities after reading were descriptively analyzed for a fuller understanding of the reading tasks. Reading tasks used in undergraduate and graduate courses across three academic areas (Humanities and Arts (HA), Social Sciences (SS), and Natural Sciences, Engineering, and Math (NEM)) were described.

For the second research question, the characteristics of reading tasks in EMI courses as reported by the students in the online questionnaire were compared to those of reading tasks in TOEFL iBT and IELTS, in terms of genre, primary text function, and reader purpose.

For the third research question, students’ perceptions of comparability of reading tasks in EMI courses and reading proficiency tests in terms of the structure of the text, question type, and the difficulty of texts and questions were analyzed using descriptive statistics for the TOEFL iBT reading tasks and the IELTS reading tasks.
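To make this tabulation concrete, the sketch below shows one way multi-select questionnaire responses of this kind could be turned into the counts and percentages reported in Tables 3, 4, 5, and 6. It is a minimal illustration in Python using hypothetical records and illustrative labels, not the study’s actual data or analysis script.

    from collections import defaultdict

    # Hypothetical records: (academic_area, genre, selected_text_functions).
    # Labels are illustrative; the study's raw data are not public.
    responses = [
        ("HA", "non-textbook book", ["define/describe/elaborate/illustrate"]),
        ("HA", "lecture slides", ["define/describe/elaborate/illustrate",
                                  "compare/contrast/classify"]),
        ("SS", "journal article", ["problem/solution"]),
        ("NEM", "textbook", ["define/describe/elaborate/illustrate",
                             "problem/solution"]),
    ]

    # Count reading tasks per academic area, and count how many tasks were
    # reported to carry each text function. A task may carry several
    # functions, so function totals can exceed the number of tasks,
    # as the article notes for Table 4.
    tasks_per_area = defaultdict(int)
    function_counts = defaultdict(lambda: defaultdict(int))
    for area, _genre, functions in responses:
        tasks_per_area[area] += 1
        for function in functions:
            function_counts[area][function] += 1

    # Cell percentage = tasks with that function / total tasks in the area.
    for area, counts in function_counts.items():
        for function, n in counts.items():
            pct = 100 * n / tasks_per_area[area]
            print(f"{area}: {function} = {n} ({pct:.0f}%)")

The same pattern (count per area, then divide by the area’s task total) applies to the genre, reader-purpose, and post-reading-activity items.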

Results

Characteristics of EMI English reading tasks

Genre

Table 3 shows the number of participants who selected each genre in response to the following question: What kind of reading are you expected to do for this course? As shown in Table 3, the genre of texts that students were expected to read in EMI courses varied considerably by academic area. The most common reading assignments in HA courses were non-textbook books (77%) and lecture slides (54%). The most common reading assignments in SS and NEM courses were lecture slides (SS 75%, NEM 86%), academic journal articles (SS 75%, NEM 52%), and textbooks (SS 55%, NEM 81%). HA courses, which used non-textbook books most, used lecture slides and textbooks the least of the three areas, whereas NEM courses, which relied most on lecture slides and textbooks, rarely used non-textbook books.

Table 3 Genre distribution of EMI reading texts (number of courses with each genre of reading)

Text function

Table 4 shows the number and percentage of reading tasks based on student responses to the following question: What is the primary function of the texts you have to read? (Select all that apply.) This question was asked for each genre of EMI text that students reported reading in their EMI courses. Each cell shows the number of reading tasks reported to have that text function. Because students could select more than one text function for a given genre, the total number of responses is greater than the number of reading tasks in each academic area. Numbers in parentheses represent the percentage of reading tasks in each cell relative to the total number of reading tasks in that academic area. For example, 18 out of 27 HA reading tasks (67%) were reported to have a primary function of define/describe/elaborate/illustrate.
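Stated as a formula (a restatement of the computation just described, not notation from the original article): \( \%_{\text{cell}} = 100 \times n_{\text{cell}} / N_{\text{area}} \), where \( n_{\text{cell}} \) is the number of tasks in the area reported to have that function and \( N_{\text{area}} \) is the total number of reading tasks in the area; for the example above, \( 100 \times 18/27 \approx 67\% \).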

Table 4 Primary function of EMI reading texts

As shown in Table 4, the primary function of the majority of texts across all three academic areas included define/describe/elaborate/illustrate, although there were differences in proportion. The proportion of texts with the primary function of define/describe/elaborate/illustrate was higher in SS courses (88%) and NEM courses (86%) compared to HA courses (67%). The proportion of texts with the primary function of problem/solution was highest in NEM courses (49%) and lowest in HA courses (7%), with SS courses being in the middle (30%).

Reader purpose

Table 5 shows the number and the percentage of students who selected each response option to the following question: How would you describe the purpose of your reading when you read [the genre that students reported reading in their EMI course]? (Select all that apply.)

Table 5 Reader purposes of EMI reading tasks

In general, students across the three academic areas responded that the reader purposes of the majority of texts included ‘reading to find information and for basic comprehension’ and ‘reading to learn.’ However, more than half of the reading tasks in HA courses (52%) were reported to have a reader purpose of ‘reading to integrate,’ whereas fewer reading tasks in SS (38%) and NEM courses (29%) were reported to have this purpose.

Post-reading required activities

Table 6 shows the number and the percentage of students who selected each response option to the following question: What are the required activities you should complete after reading [the genre that students reported reading in their EMI course]? (Select all that apply.)

Table 6 Required activities after reading

HA and SS courses required a wider range of post-reading activities than NEM courses. Specifically, HA reading tasks were commonly followed by four activities: giving a presentation (67%), participating in classroom discussions (63%), writing a research paper (52%), and taking a written exam (48%). The majority of SS reading tasks led to two activities: participating in classroom discussions (64%) and taking a written exam (55%), and over a third (38%) were used for giving a presentation. In NEM courses, however, the only activity that followed the majority of reading tasks was taking a written exam (73%).

Comparison of English reading tasks in TOEFL iBT and EMI courses

Table 7 shows the genre distribution of the TOEFL iBT Reading texts. Four TOEFL iBT reading tests were analyzed, which consisted of 12 texts and 167 questions. There was a large difference between the genre of the TOEFL iBT reading texts and the genre of the EMI reading texts.

Table 7 Genre distribution of the TOEFL iBT reading texts

All of the TOEFL iBT reading texts examined were textbook-like, whereas a wide range of genres was found in EMI reading texts, including textbooks, non-textbook books, lecture slides, newspaper/magazine articles, academic journal articles, and classmates’ writings (see Table 3). As shown in Tables 3 and 7, the genre distribution of the TOEFL iBT reading texts was relatively similar to that of reading texts assigned in NEM courses, in that most NEM courses (81%) assigned textbooks and the range of genres was narrower in NEM courses than in HA or SS courses. The genre distribution of the TOEFL iBT reading texts was very different from that of reading texts in HA courses, in that few HA courses (23%) assigned textbooks. The TOEFL iBT reading texts differed especially from reading texts in SS courses in the range of text genres: while the TOEFL iBT reading texts consisted only of textbook-like texts, reading texts in SS courses spanned a variety of genres, such as non-textbook books, lecture slides, academic journal articles, and classmates’ writings.

Table 8 shows the primary function of the TOEFL iBT reading texts. The primary function of the majority of TOEFL iBT reading texts was to define/describe/elaborate/illustrate. This pattern was similar to that of EMI reading texts, most of which were reported to have a primary function of define/describe/elaborate/illustrate across all three academic areas (see Table 4). Compare/contrast/classify was the primary function of some TOEFL iBT reading texts and of about half of EMI reading texts across the three academic areas. However, a wider range of primary functions was reported for EMI reading texts (see Table 4).

Table 8 Primary function of the TOEFL iBT reading texts

Table 9 shows the reader purposes of the TOEFL iBT Reading Test questions. Since the reader purposes for TOEFL reading texts were determined based on the reading questions, they cannot be directly compared to the reader purposes of EMI reading texts, which do not have any specific reading questions. Generally, the reader purposes of TOEFL and EMI reading texts across the three academic areas were similar in that the majority of reading texts in both contexts required ‘reading to find information and reading for basic comprehension,’ as shown in Tables 5 and 9. Both TOEFL and EMI reading texts across the three academic areas also required ‘reading to learn,’ although this reader purpose was more often required for EMI reading texts. There was a large difference between the TOEFL and EMI reading texts in terms of ‘reading to integrate.’ Only one of the 167 TOEFL reading questions required ‘reading to integrate,’ whereas this reader purpose was required in 52% of the HA texts, 38% of the SS texts, and 29% of the NEM texts. However, it should be noted that test-takers do read to integrate in one of the two TOEFL Writing tasks: in the Integrated Writing task, test-takers read a passage, listen to a short lecture on the topic, and write an essay.

Table 9 Reader purposes of the TOEFL iBT reading test questions

Comparison of English reading tasks in IELTS and EMI courses

Table 10 shows the genre distribution of the IELTS Reading texts. Four IELTS reading tests were analyzed, which consisted of 12 texts and 160 questions.

Table 10 Genre distribution of the IELTS reading texts

As shown in Tables 3 and 10, both the IELTS reading texts and the EMI reading texts included a wide range of genres. Both included textbooks, newspaper/magazine articles, and academic journal articles. However, the genre distributions of the IELTS and EMI reading texts were very different. Few IELTS reading texts were from textbooks, whereas textbooks were often assigned in EMI courses, especially in SS and NEM courses. The majority of IELTS reading texts were from newspaper/magazine articles, which were among the least common reading assignments in EMI courses across all three academic areas. Both IELTS and EMI reading texts included academic journal articles, but these were much more common in EMI courses, especially in SS and NEM courses.

Table 11 Primary function of the IELTS reading texts

Table 11 shows the primary function of the IELTS Reading texts. In general, both IELTS reading texts and EMI reading texts showed a wide range of primary functions, as shown in Tables 4 and 11. However, the proportion of texts with the primary functions of define/describe/elaborate/illustrate and compare/contrast/classify was much higher in EMI reading texts across all three academic areas than in IELTS reading texts. While 25% of the IELTS reading texts had a primary function of define/describe/elaborate/illustrate, 67%, 88%, and 86% of the HA, SS, and NEM texts, respectively, were reported to have this primary function. Also, about half of the HA, SS, and NEM texts were reported to have a primary function of compare/contrast/classify, whereas no IELTS texts had this primary function. The proportion of IELTS texts with a primary function of problem/solution (8%) was similar to that of the HA texts (7%), which was much lower than those of the SS texts (30%) and the NEM texts (49%).

Table 12 Reader purposes of the IELTS reading test questions

Table 12 shows the reader purposes of the IELTS Reading Test questions. Generally, both IELTS and EMI reading texts required ‘reading to find information and reading for basic comprehension,’ as shown in Tables 5 and 12. Texts in both contexts across the three academic areas also required ‘reading to learn,’ although this reader purpose was more often required for EMI reading texts. There was a clear difference between the IELTS and EMI reading texts in terms of ‘reading to integrate.’ No IELTS reading question required ‘reading to integrate,’ whereas this reader purpose was required in 52% of the HA texts, 38% of the SS texts, and 29% of the NEM texts.

Students’ perceptions of the comparability of reading tasks in TOEFL iBT and IELTS and EMI courses

Structure

Figures 1 and 2 show the percentage of participants who selected each response option to the following two statements, one focused on TOEFL and one focused on IELTS: The structure of the (TOEFL/IELTS) reading text is comparable to that of what I read for my course.

Fig. 1 Perceived comparability of text structure between TOEFL iBT and EMI reading texts

Fig. 2 Perceived comparability of text structure between IELTS and EMI reading texts

As shown in Fig. 1, approximately 70% of students in HA and SS courses either disagreed or strongly disagreed that the structure of the TOEFL reading text was comparable to that of the texts in their EMI course. When asked to explain their responses, both HA and SS students indicated that the TOEFL reading text was simpler, more clearly structured, and more informative, whereas their EMI reading texts were more complex in structure, either being more persuasive and drawing on various sources of evidence or being more narrative without a clear structure.

A higher percentage of students in NEM courses compared to those in HA and SS courses agreed that the TOEFL iBT and EMI reading texts were comparable in terms of structure: 50% of the students either agreed or strongly agreed. When asked, NEM students explained that the two texts are similar in that both are structured with a topic or phenomenon and supporting details and evidence. By contrast, the other 50% of students in NEM courses either disagreed or strongly disagreed with the statement. They indicated that their EMI reading texts consisted more of simple definitions and examples, rather than long descriptions or explanations of a topic as in the TOEFL reading texts.

As Fig. 2 indicates, approximately 65% of students in HA and NEM courses either disagreed or strongly disagreed that the structure of the IELTS reading text was comparable to that of what they read for their EMI course. When asked to explain their responses, HA students noted that the IELTS reading text was more clearly organized into paragraphs with a clear thesis to persuade readers, whereas their EMI reading texts were more narrative, had a more complex structure, and did not usually support a certain position. NEM students explained that the IELTS reading text was closer to persuasive writing with a claim and supporting evidence, whereas their EMI reading texts were more informative and organized in brief sentences to clearly convey key ideas.

On the other hand, 63% of students in SS courses either agreed or strongly agreed that the structure of IELTS reading texts and EMI reading texts was comparable. When asked to explain, SS students indicated that both IELTS and EMI reading texts introduce a topic with examples and explanations. For example, a student explained that “the two texts are similar in that both present a theory with real-life examples or findings from recent studies.” Conversely, 37% of students in SS courses either disagreed or strongly disagreed with the statement. They explained that their EMI reading texts refer to existing literature and include tables and graphs to support their ideas whereas these were missing in the IELTS reading texts.

Reading questions

Figures 3 and 4 show the percentage of participants who selected each response option to the following statements about TOEFL and IELTS reading questions: The types of the (TOEFL/IELTS) reading questions are comparable to those of the questions I have to answer about the text in my (EMI) course.

Fig. 3 Perceived comparability of the type of questions between TOEFL iBT and EMI reading tasks

Fig. 4 Perceived comparability of the type of questions between IELTS and EMI reading tasks

As shown in Fig. 3, most students in all three academic areas either disagreed or strongly disagreed with the statement (HA 91%, SS 80%, NEM 90%). As Fig. 4 indicates, similar results were found for IELTS, although the percentage of students who disagreed or strongly disagreed was slightly lower than that for TOEFL reading questions (HA 75%, SS 66%, NEM 65%). When asked to explain, students consistently noted that the questions they answered for their EMI reading texts required more analytical and critical thinking and expression of their personal evaluation of the topic based on the reading, whereas the TOEFL/IELTS reading questions mainly checked their understanding of the main ideas, vocabulary, and details.

Difficulty in reading text

Figures 5 and 6 show the participants’ responses to a question about the difficulty of the TOEFL/IELTS reading texts in comparison to the reading texts they read in their EMI course, specifically, whether they found TOEFL/IELTS texts to be easier, similar, or more difficult.

Fig. 5 Perceived difficulty of TOEFL iBT reading text compared to EMI reading text

Fig. 6 Perceived difficulty of IELTS reading text compared to EMI reading text

There were stark differences in students’ perceptions of the difficulty of TOEFL iBT texts across academic areas. As shown in Fig. 5, 69% of HA, 35% of SS, and 16% of NEM students responded that the TOEFL reading text was easier than their EMI reading texts. For IELTS, as indicated in Fig. 6, 58% of HA, 61% of SS, and 30% of NEM students responded that the IELTS reading text was easier. These students indicated that the TOEFL/IELTS reading text was easier because it has a clearer structure, shorter sentences, and easier vocabulary that does not require a lot of background knowledge. For example, an SS student said that “the vocabulary and sentence structures are easier in TOEFL reading text, and the topic of TOEFL reading text was less specialized, which made it easier to read.”

On the other hand, 23% of HA, 55% of SS, and 63% of NEM students responded that the difficulty of the TOEFL reading text was similar to that of the readings they had to do for their EMI course. For IELTS, 33% of HA, 39% of SS, and 40% of NEM students responded that the difficulty was similar. Many of them identified that the level of vocabulary and the way an idea was discussed were similar in the two texts. For example, one student in an SS course noted that “the references to specific names and arguments supported by research, and mentioning of recent trends in IELTS texts are similar to the texts I read for my course.”

Eight percent of HA, 10% of SS, and 21% of NEM students responded that the TOEFL reading text was more difficult than their EMI reading texts. Only one student in HA courses (8%) and no students in SS courses responded that the IELTS reading text was more difficult than their EMI reading texts. However, 30% of NEM students responded that the IELTS reading text was more difficult than their EMI reading texts. They explained that their EMI reading texts were more straightforward with shorter sentences, whereas the IELTS reading text involved a variety of sources (e.g., examples, research findings) to discuss a topic, using longer sentences and more unfamiliar words.

Difficulty in reading questions

Figures 7 and 8 show the participants’ responses to a question about the difficulty of the TOEFL/IELTS reading questions in comparison to the questions they had to answer about the texts in their EMI courses, specifically, whether they found the TOEFL/IELTS questions to be easier, similar, or more difficult.

Fig. 7 Perceived difficulty of TOEFL iBT reading questions compared to EMI reading questions

Fig. 8 Perceived difficulty of IELTS reading questions compared to EMI reading questions

Most students responded that the TOEFL/IELTS reading questions were easier (TOEFL: HA 77%, SS 61%, NEM 45%; IELTS: HA 67%, SS 61%, NEM 50%). When asked to explain their responses, these students consistently indicated that the questions they had to answer about their EMI reading texts required not only an understanding of the given texts but also additional research into a topic and the skills to integrate different texts into analytical writing or presentations that show their own perspective or interpretation.

More students in NEM courses selected “similar” than students in other areas (TOEFL: HA 8%, SS 17%, NEM 35%; IELTS: HA 17%, SS 22%, NEM 25%). Although many of these students said that it was hard to compare the TOEFL/IELTS reading questions with the questions about EMI reading texts, they indicated that the two are similar in that both require a general understanding of the texts. Lastly, those who responded that the TOEFL/IELTS reading questions were more difficult (TOEFL: HA 15%, SS 22%, NEM 20%; IELTS: HA 17%, SS 17%, NEM 25%) explained that the TOEFL/IELTS questions required very detailed reading of the given text, whereas the questions about EMI reading texts usually required only a general understanding of the texts.

Discussion and conclusion

The purpose of this study was to investigate the characteristics of English reading tasks in EMI university courses in Korea, the extent to which they are comparable to the characteristics of reading tasks on TOEFL iBT and IELTS, and Korean university students’ perceptions of the comparability.

Characteristics of reading tasks in EMI courses

Readings assigned in EMI courses were not limited to textbooks but included various genres such as non-textbook books, lecture slides, and academic journal articles. Similar results were found by Weir et al. (2012): students at a UK university reported in a survey that most courses required reading books, but they also had to read websites, journals, reports, newspapers, and magazines. That students in HA disciplines were expected to read more, and beyond just textbooks, is in line with the findings of Moore et al. (2012), who found in their interviews with lecturers at an Australian university that “whereas the softer humanities disciplines required extensive reading, and from a range of different sources and genres, in the harder more technical areas reading was found to be less extensive, and mainly confined to the reading of the prescribed textbook in a subject” (p. 42).

The primary function of the majority of texts in EMI courses across all three academic areas included define/describe/elaborate/illustrate. Slightly different results were found in Carson (2001), who reported that the most common organization of texts included classification across all disciplines in a USA university. Still, both define/describe/elaborate/illustrate and classify are categorized under “exposition” (Enright et al. 2000), suggesting exposition is the most common type of text function in both EMI and English-speaking university contexts.

In general, students across the three academic areas responded that the reader purposes of the majority of texts included ‘reading to find information and reading for basic comprehension’ and ‘reading to learn.’ This is comparable to findings from previous studies in English-speaking university contexts, which identified understanding main ideas and details as the most important reading skill (Carson, 2001; Liu and Brown, 2019; Rosenfeld et al. 2001). On the other hand, more reading tasks in HA courses required the reader purpose of ‘reading to integrate’ than in SS and NEM courses. Moore et al. (2012) similarly found that basic comprehension of the material was emphasized in more technical disciplines, whereas more interpretive reading skills were emphasized in more humanities-oriented disciplines. They explained that basic understanding was also considered important in humanities-oriented disciplines, but it was often assumed and taken for granted.

In sum, some characteristics of reading tasks in EMI courses such as the primary function and the reader purpose were common across academic areas, whereas genre and required activities after reading varied across academic areas. Another interesting finding is that reading tasks in EMI courses appear to share several characteristics with those in courses in the USA, UK, and Australian contexts, although the characteristics found in this study cannot be directly compared to the findings from previous studies because different frameworks were used to examine the characteristics of reading tasks. This finding provides supporting evidence for the use of international tests in this EMI context.

Comparison of reading tasks in EMI courses and TOEFL iBT/IELTS

Analyses revealed that reading tasks in EMI courses and TOEFL and IELTS had some key characteristics in common. However, the most important difference between the reading tasks in the two contexts was that there was more variety in task characteristics in EMI reading tasks than in test tasks, which makes sense given that tests are limited to a few hours.

The TOEFL iBT reading texts that were examined were textbook-like, whereas a wide range of genres was found in EMI reading texts. For both TOEFL iBT and EMI reading texts, the two major primary functions were define/describe/elaborate/illustrate and compare/contrast/classify. However, a wider range of primary functions of reading texts was reported in EMI reading texts. The IELTS reading texts were similar to the EMI reading texts in that both showed a wide range of genres and primary functions, but the percentages of each text genre and primary function of the IELTS and EMI reading texts were very different.

Generally, the reader purposes of TOEFL iBT/IELTS and EMI reading texts across the three academic areas were similar in that the majority of all reading tasks required the reader to read ‘to find information and for basic comprehension.’ All readings also required the reader to read ‘to learn,’ although this reader purpose was more often reported for EMI reading texts. There was a large difference between the tests and the EMI reading texts in the reader purpose of ‘reading to integrate,’ with far fewer reading questions on the tests requiring ‘reading to integrate’ than reading tasks in EMI courses. That university courses often require students to read ‘to integrate’ was also found in studies of English-speaking universities (Carson, 2001; Hartshorn et al. 2017). Moore et al. (2012) likewise noted that IELTS and academic reading tasks are similar in requiring basic comprehension of reading material, but differ in that Australian university courses included a considerable variety of reading tasks requiring interpretive and integrated skills.

Overall, the findings suggest that reading tasks in the two English proficiency tests are more limited in range than those in EMI courses, suggesting that the language skills and abilities that are elicited by the test tasks may not fully reflect those elicited by the actual language tasks in EMI universities. This finding provides partial support for the domain description inference in the test validity argument, as the test performance of students may not reveal the full range of skills and abilities needed in the EMI university context. Of course, it would not be possible for an English proficiency test to represent a wide range of TLU task characteristics due to practical constraints. However, for test performance to “reveal relevant knowledge, skills, and abilities in situations representative of those in the target domain” (Chapelle et al. 2008, p.14), a higher level of correspondence would provide more supporting evidence for the domain description inference.

Students’ perceptions of the comparability of EMI reading tasks and TOEFL iBT/IELTS reading tasks

Students’ perceptions of the comparability of EMI reading tasks and the test reading tasks varied significantly based on academic area. The majority of students in HA courses responded that the TOEFL reading text was easier than their EMI reading texts, while the majority of students in SS and NEM courses responded that the difficulty of the TOEFL reading text was similar to that of the readings they have to do for their EMI course. For IELTS, the majority of students in HA and SS courses responded that the IELTS reading text was easier than their EMI reading texts. On the other hand, more students in NEM courses than in other courses responded that the difficulty of the IELTS reading text was similar to or even more difficult than that of their EMI readings.

The majority of students in all three academic areas responded that the TOEFL/IELTS reading questions were easier than the questions they had to answer about their EMI texts. However, it is interesting to note that for both TOEFL iBT and IELTS, more students in NEM courses than in HA and SS courses responded that the difficulty of the TOEFL iBT/IELTS reading questions was similar to that of the questions they had to answer about their EMI reading texts. Similar results were found by Moore et al. (2012) in their interviews with lecturers at an Australian university: lecturers from more technical disciplines perceived the IELTS reading tasks to be comparable or even more complex, whereas lecturers from humanities disciplines had a more critical view of the comparability of IELTS and their course reading tasks.

Overall, students in NEM courses perceived reading tasks in their EMI courses and in TOEFL iBT/IELTS to be the most comparable, whereas students in HA courses perceived them to be the least comparable. The findings suggest that the language skills and abilities elicited by the test tasks may reflect those elicited by actual language tasks in some academic areas more than in others. In other words, the domain description inference of an English proficiency test might be supported to different extents across disciplines. Llosa and Malone (2017) found similar variability in the context of TOEFL iBT writing: the writing elicited by the TOEFL iBT tasks was more representative of the types of writing students produce in some academic areas (e.g., Humanities and Social Sciences) and less representative of the writing they produce in others (e.g., Biological and Health Sciences).

This study is not without limitations. Only 54 students completed the questionnaire, as data collection was interrupted by school closures and shutdowns due to COVID-19, and the data were collected from only one university. This made it difficult to make comparisons between years in the program or between undergraduate and graduate programs, and the small sample size also constrained the application of more advanced analytical methods for group comparisons. In addition, the course reported by a participant might not have been representative of all the courses in that discipline; a future study could focus on one discipline and include a qualitative analysis of the EMI tasks used in that discipline. Future studies could also use a larger sample to make comparisons across different levels of programs using more advanced analytical methods, and could gather evidence about comparability from both students and instructors. Furthermore, different universities use different English proficiency tests for admissions or after enrollment. Since it was not feasible to include many tests in our questionnaire, we focused only on TOEFL iBT and IELTS, two widely known international English proficiency tests; the findings may therefore not be relevant to universities that use other tests, although users of those tests could conduct similar comparisons between their test tasks and EMI reading tasks. Finally, future studies could go beyond comparing the task characteristics of EMI language tasks and English proficiency test tasks to comparing students’ performance in the two contexts.

Availability of data and materials

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. For example, Seoul National University requires a TOEFL iBT score (minimum scores range from 81 to 114) or an equivalent English proficiency test score to apply for any graduate program, and a TOEFL iBT score of 80 or higher (or an equivalent test score) for undergraduate international applicants to any program. Yonsei University and Korea University require a TOEFL iBT or IELTS score to apply for some graduate programs.

  2. For example, Korea Advanced Institute of Science and Technology (KAIST) requires newly admitted students to take the TOEFL iBT and provides additional English language courses based on their scores. Students can instead submit their own TOEFL iBT or IELTS scores if they already have them.

  3. The goal was to collect data from 100 participants, but data collection, which took place in Spring 2020, was interrupted by lockdowns in Korea due to the COVID-19 pandemic.

  4. https://www.ets.org/toefl/ibt/about/content/

  5. https://www.ielts.org/en-us/about-the-test/test-format

Abbreviations

EMI: English-medium instruction

TLU: Target language use

HA: Humanities and Arts

SS: Social Sciences

NEM: Natural Sciences, Engineering, and Math

References

  • Alderson, J. C. (2000). Assessing reading. Cambridge University Press. https://doi.org/10.1017/CBO9780511732935


  • Altbach, P. G. (2004). Globalisation and the university: myths and realities in an unequal world. Tertiary Education and Management, 10(1), 3–25. https://doi.org/10.1080/13583883.2004.9967114


  • Biber, D., Conrad, S., Reppen, R., Byrd, P., & Helt, M. (2002). Speaking and writing in the university: a multidimensional comparison. TESOL Quarterly, 36(1), 9–48. https://doi.org/10.2307/3588359


  • Byun, K., Chu, H., Kim, M., Park, I., Kim, S., & Jung, J. (2011). English-medium teaching in Korean higher education: policy debates and reality. Higher Education, 62(4), 431–449. https://doi.org/10.1007/s10734-010-9397-4


  • Cambridge ESOL. (2019). IELTS Academic 14. Cambridge University Press.

  • Carson, J. G. (2001). A task analysis of reading and writing in academic contexts. In D. Belcher & A. Hirvela (Eds.), Linking literacies: perspectives on L2 reading-writing connections (pp. 48–83). University of Michigan Press.


  • Chapelle, C. A., Enright, M. K., & Jamieson, J. M. (2008). Building a validity argument for the test of English as a foreign language. (C. A. Chapelle, M. K. Enright, & J. M. Jamieson, Eds.). Routledge.

  • Enright, M. K., Grabe, W., Koda, K., Mosenthal, P., Mulcahy-Ernt, P., & Schedl, M. (2000). TOEFL 2000 reading framework: a working paper (TOEFL Monograph Vol.17). Educational Testing Service.

  • ETS. (2018). Official TOEFL iBT Tests Vol. 1 (Third Edition). McGraw Hill Education.

  • Hale, G., Taylor, C., Bridgeman, B., Carson, J. G., Kroll, B., & Kantor, R. (1996). A study of writing tasks assigned in academic degree programs (Educational Testing Service Research Report 54). Educational Testing Service. https://doi.org/10.1007/978-1-349-20872-2


  • Hartshorn, K. J., Evans, N. W., Egbert, J., & Johnson, A. (2017). Discipline-specific reading expectation and challenges for ESL learners in US universities. Reading in a Foreign Language, 29(1), 36–60. http://hdl.handle.net/10125/66727

  • IELTS. (2007). The IELTS Handbook 2007. British Council.

  • Joe, Y. J., & Lee, H. K. (2013). Does English-medium instruction benefit students in EFL contexts? a case study of medical students in Korea. Asia-Pacific Education Researcher, 22(2), 201–207. https://doi.org/10.1007/s40299-012-0003-7


  • Kim, J., & Tatar, B. (2017). Nonnative English-speaking professors’ experiences of English-medium instruction and their perceived roles of the local language. Journal of Language, Identity and Education, 16(3), 157–171. https://doi.org/10.1080/15348458.2017.1295811


  • Kim, J., Tatar, B., & Choi, J. (2014). Emerging culture of English-medium instruction in Korea: experiences of Korean and international students. Language and Intercultural Communication, 14(4), 441–459. https://doi.org/10.1080/14708477.2014.946038


  • Lee, K., & Lee, H. (2018). Korean graduate students’ self-perceptions of English skills and needs in an English-medium instruction context. Journal of Multilingual and Multicultural Development, 39(8), 715–728. https://doi.org/10.1080/01434632.2018.1438442


  • Lei, J., & Hu, G. (2014). Is English-medium instruction effective in improving Chinese undergraduate students’ English competence? International Review of Applied Linguistics in Language Teaching, 52(2), 99–126. https://doi.org/10.1515/iral-2014-0005


  • Liu, X., & Brown, G. T. L. (2019). Investigating students’ perceived cognitive needs in university academic reading: a latent variable approach. Journal of Research in Reading, 42(2), 411–431. https://doi.org/10.1111/1467-9817.12275


  • Llosa, L., & Malone, M. E. (2017). Student and instructor perceptions of writing tasks and performance on TOEFL iBT versus university writing courses. Assessing Writing, 34, 88–99. https://doi.org/10.1016/j.asw.2017.09.004


  • Macaro, E., Curle, S., Pun, J., An, J., & Dearden, J. (2018). A systematic review of English medium instruction in higher education. Language Teaching, 51(1), 36–76. https://doi.org/10.1017/S0261444817000350


  • Moore, T., Morton, J., & Price, S. (2012). Construct validity in the IELTS Academic Reading test: a comparison of reading requirements in IELTS test items and in university study (IELTS Research Reports 11). British Council.

  • Owen, N., Shrestha, P. N., & Hultgren, A. K. (2021). Researching academic reading in two contrasting English as a medium of instruction contexts at a university level (ETS Research Report Series 2021). Educational Testing Service. https://doi.org/10.1002/ets2.12317


  • Park, S. (2021). 20년간 60% 줄어든 신생아... 인구부족 태풍, 학교·군대 덮친다 [Newborns decreased by 60% over 20 years... Population shortage typhoon hits schools and military.] Hankookilbo. https://www.hankookilbo.com/News/Read/A2021010415480000681?did=NA

  • Phan, L. H. (2013). Issues surrounding English, the internationalisation of higher education and national cultural identity in Asia: a focus on Japan. Critical Studies in Education, 54(2), 160–175. https://doi.org/10.1080/17508487.2013.781047


  • Rosenfeld, M., Leung, S., & Oltman, P. K. (2001). The reading, writing, speaking and listening tasks important for academic success at the undergraduate and graduate levels (TOEFL Monograph Vol. 21). Educational Testing Service.

  • Vu, N. T. T., & Burns, A. (2014). English as a medium of instruction: Challenges for Vietnamese tertiary lecturers. Journal of Asia TEFL, 11(3), 1–31.


  • Weir, C., Hawkey, R., Green, A., Unaldi, A., & Devi, S. (2012). The relationship between the academic reading construct as measured by IELTS and the reading experiences of students in their first year of study at a British university (IELTS Research Reports Vol. 9). British Council.


Acknowledgements

Not applicable.

Funding

Not applicable.

Author information


Contributions

LL made substantial contributions to the conceptualization and design of the study. SY participated in the design of the study, collected, analyzed, and interpreted the student data regarding the characteristics of reading tasks in Korean university courses, and was a major contributor to writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Soohye Yeom.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Institutional Review Board (IRB) at the authors’ university (IRB-FY2020-3980). Informed consent to participate in this study was obtained from all participants.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Yeom, S., Llosa, L. Comparability of reading tasks in high-stakes English proficiency tests and university courses in Korea. Lang Test Asia 14, 8 (2024). https://doi.org/10.1186/s40468-024-00281-5
