
Using grounded theory to validate Bachman and Palmer’s (1996) strategic competence in EFL graph-writing

Abstract

Background

Research studies (e.g., Phakiti, Language Testing 25(2):237–272, 2008a; Phakiti, Language Assessment Quarterly 5(1):20–42, 2008b; Purpura, Learner strategy use and performance on language tests: a structural equation modeling approach, 1999) conducted on the construct validation of Bachman and Palmer’s (1996) strategic competence have been quantitative in nature. Furthermore, the nature of Bachman and Palmer’s (1996) strategic competence model with regard to graph-writing has remained unexplored.

Methods

The present study aimed to investigate the validity of the strategic competence model, comprising three components, namely goal setting, assessment, and planning, through a grounded theory approach. To do so, 8 English Language students participated in this study over 10 weeks. They completed a graph prompt task (GPT) while thinking aloud about their writing processes. Then, their retrospective interview protocols were collected. The students were observed while writing the GPT, and writing samples from each student were provided by the observers. The qualitative analysis program NVivo 10 was used to import, transcribe, summarize, and visualize the data. Consistent with the constructivist grounded theory approach, the data were analyzed through initial coding, focused coding, axial coding, and theoretical coding.

Results

The model that explained the processes of this task included five major processes. These processes involved Analyzing Non-Graphic Information of Instruction, Translating Graphic and Non-Graphic Information into Written Discourse, Retrieving Personal Interpretation and Additional Reasoning, Evaluating Graph Comprehension, and Reformulating Graph Description into EFL Written Discourse.

Conclusion

This model revealed the recursive nature of the core processes in relation to the different components of strategic competence, and its credibility was established through data triangulation. Methodologically, such findings highlight the importance of considering test takers’ cognitive processes when designing and developing graph-writing tasks.

Background

Examination of language test performance depends on different factors that cause individuals’ performance to vary. According to Bachman (1990), four main factors influence language test performance: test method facets, personal attributes, random factors, and communicative language ability. Communicative language ability provides a framework for describing the ways in which specific abilities shape how different individuals perform on a given language test. This framework consists of three main components: language competence, strategic competence, and psychophysiological mechanisms. Among these three components, strategic competence, as a mental capacity, provides the link between language competence, features of the context of situation, and the language user’s sociocultural and real-world knowledge. Bachman (1990) presented a model of strategic competence consisting of three main components: planning, assessment, and execution. These areas of strategic competence provide the basis for individuals to identify the given information in the task, retrieve relevant items from language competence, and implement the plan by drawing on psychophysiological mechanisms.

Later, Bachman and Palmer (1996) considered communicative language ability the main contributor to language performance. In their model, Bachman and Palmer (1996) defined strategic competence as a set of metacognitive strategies, or components, which can be considered higher-order executive processes that implement a cognitive management function in language use. Based on this definition, strategic competence consists of three main components: goal-setting, assessment, and planning. These components of strategic competence, or metacognitive strategies, interact with other areas of language ability (language knowledge), topical knowledge, affective schemata, and features of the context in which language use takes place. Conceptualizing strategic competence as metacognitive strategies helps us make inferences about individuals’ language test performance and design language tests based on the interaction of metacognitive strategies with features of the context of language use.

In the domain of language testing, some studies (Phakiti 2008a; Purpura 1999) were conducted to validate the strategic competence proposed by Bachman and Palmer (1996) in reading comprehension through quantitative approaches. However, such approaches cannot fully capture test takers’ cognitive processes, as Bachman (1990) pointed out:

“A … critical limitation to correlational and experimental approaches to construct validation … is that these examine only the products of the test taking process, the test scores, and provide no means for investigating the processes of test taking themselves” (p. 269).

In this research study, the selection of a qualitative rather than a quantitative method enabled us to establish an open approach to theory development. Since no research has addressed the validation of Bachman and Palmer’s (1996) strategic competence in light of the International English Language Testing System (IELTS) Graph Prompt Task (GPT) with a bar graph as a prompt, using a Grounded Theory (GT) qualitative approach allowed the concepts and design to emerge from the data collected (Strauss and Corbin 1998). Moreover, GT is an appropriate methodology “when a phenomenon has not been adequately described, or when there are few theories that explain it” (Henderson, as cited in Skeat & Perry, 2008, p. 97). The complex and largely unexplored nature of strategic competence in writing test performance calls for a qualitative approach that can trace students’ mental processes and establish the credibility of strategic competence with regard to the emerging processes.

This methodology provided a rich description of the cognitive processes involved in the GPT by triangulating findings from multiple data sources in order to validate the strategic competence model, explain its main components at a theoretical level with reference to an understanding of cognitive processes, and “provide a meaningful guide to action” (Strauss and Corbin 1998, p. 12).

Hence, the purpose of this study is to validate the model of strategic competence presented by Bachman and Palmer (1996) with respect to the mental processes that emerge during the GPT, using GT.

Validation of strategic competence in graph-writing can advance research on the testing of writing, academic literacy, and knowledge construction in academic writing research. Practically, this study can help teachers find out how students approach and interact with the different processes underlying the GPT in order to improve the quality of writing instruction. According to Bachman (2004), process-oriented studies are substantial if the processes employed by test takers to complete the tasks match the processes that test developers intended to measure. To this aim, this study provides language test developers with useful insight into how test takers approach the GPT. If the test-taking processes do not match what the test developers expect, the construct validity of the task might be threatened. Hence, test developers should take these considerations into account when designing and validating such tasks by minimizing the threatening factors.

From a methodological point of view, GT explains the emerging processes at a theoretical level, unlike quantitative approaches, which examine only test scores regardless of the processes in which test takers are involved when producing their responses. Theoretically, the model grounded in the data themselves demonstrates how students approach the tasks through different cognitive operations. Hence, the emerging grounded model fills the existing gap in the literature by providing a thick description of the GPT.

Literature review

It should be noted that in this study the examination of the literature was postponed until data analysis was completed. Reviewing the literature prior to conducting this piece of research might have hampered us from exploring the emergent ideas. According to Strauss and Corbin (1998), “[in grounded theory approach] the researcher does not want to be so steeped in the literature that he or she is constrained and even stifled by it” (p. 49) and has to remain firmly grounded in the data “without any preconceived theory that dictates, prior to the research, ‘relevancies’ in concepts and hypotheses” (Glaser and Strauss 1967, p. 33).

Language competence

Canale and Swain (1980) proposed a theoretical framework of language competence consisting of three main components: grammatical competence, sociolinguistic competence, and strategic competence. In this framework, strategic competence referred to “verbal and non-verbal communication strategies that may be called into action to compensate for breakdowns in communication due to performance variables or to insufficient competence” (p. 30). In their model, strategic competence can be described as providing a compensatory function when the linguistic competence of language users is inadequate. Canale (1983) expanded the definition of strategic competence so that it includes both the compensatory characteristics of communication strategies and the enhancement characteristics of production strategies:

“mastery of verbal and nonverbal strategies both (a) to compensate for breakdowns in communication due to insufficient competence or to performance limitations and (b) to enhance the rhetorical effect of utterances” (p. 339).

While these definitions indicate the function of strategic competence in facilitating communication, they do not explain the mechanisms by which strategic competence operates.

Another definition provided by Candlin (1986) explains communicative competence as:

The ability to create meanings by exploring the potential inherent in any language for continual modification in response to change, negotiating the value of convention rather than conforming to established principle… In sum, … a coming together of organized knowledge structures with a set of procedures for adapting this knowledge to solve new problems of communication that do not have ready-made and tailored solutions (p. 40).

According to this definition, Candlin did not incorporate strategic competence as a major component of communicative competence. Instead, he placed compensatory strategies at the center of his framework.

Later, Bachman (1990) described the framework of communicative language ability (CLA) as involving three main components: language competence, strategic competence, and psychophysiological mechanisms. In this model, strategic competence can be characterized as the mental repertoire for implementing the components of language competence in contextualized communicative language use. Based on this definition, strategic competence is considered the mental capacity to link the components of language competence to features of the context of situation. Strategic competence includes three main components: assessment, planning, and execution.

Bachman and Palmer (1996), however, proposed strategic competence as a set of metacognitive components or strategies, which can be referred to as higher-order executive processes that provide a cognitive management function in language use, as well as in other cognitive activities. There are three general areas in which metacognitive components play a major role: goal setting (deciding what one is going to do), assessment (taking stock of what is needed, what one has to work with, and how well one has communicated), and planning (deciding how to use what one has).

It should be pointed out that Bachman (1990) and Bachman and Palmer (1996) did not present the same definition of strategic competence. The difference lies in Bachman and Palmer’s (1996) view of strategic competence as metacognitive strategies through which individuals use the available online resources to regulate emerging cognitive processes and achieve their communicative goals. Hence, the model of strategic competence proposed by Bachman and Palmer (1996) is central to this study.

Given Bachman and Palmer’s (1996) definition of strategic competence as a set of metacognitive strategies, it is worth noting that researchers have defined metacognitive strategies differently. For example, Flavell (1987) defined metacognition as comprising two main components: knowledge about cognition, which refers to learners’ awareness of their strategy use when engaged in activities, and regulation of cognition, which refers to the regulation of cognitive processes and the use of strategies to achieve a goal. Furthermore, Purpura (1999) conceptualized metacognitive strategies as a set of conscious or unconscious processes that are directly or indirectly connected to language use and have executive capacity.

There are also some empirical studies (Purpura 1997; Phakiti 2003, 2008a, b) on the nature of strategic competence. The results of these studies have broadened the scope of strategic competence beyond a set of metacognitive strategies.

Purpura (1997) investigated the relationship between strategy use and second language reading comprehension test performance. He argued that the notion of strategic competence must go beyond a set of metacognitive strategies because use of the target language requires cognitive, affective, and social strategies as well as metacognitive strategies.

In another study, Phakiti (2008b) examined the relationship of test takers’ long-term strategic knowledge (i.e., trait strategies) and actual strategy use (i.e., state strategies) to L2 reading test performance over time. The findings demonstrated that strategic competence needs to include both strategic knowledge and knowledge about cognition as two facets of the model. In other words, he stated that the strategic model encompasses both cognitive and metacognitive strategies, with metacognitive strategies operating over cognitive strategies.

Whether strategic competence is a set of compensatory, cognitive, metacognitive, affective, and/or social strategies remains to be established. The empirical studies are too few to yield conclusive results on the nature of strategic competence. Moreover, these studies have examined the components of strategic competence utilized in reading comprehension; the nature of strategic competence with reference to SL/FL writing has remained underexplored. In addition, the few existing studies have been quantitative in nature.

This research study applied GT as a qualitative approach. One reason for selecting a qualitative method is that “Qualitative methods can be used to explore substantive areas about which little is known or about which much is known to gain novel understandings” (Strauss & Corbin, p. 11). Creswell and Chalder (2002) explained that the grounded theory approach is appropriate “when you want to develop or modify a theory, explain a process, and develop a general abstraction of the interaction and action of the people” (p. 456).

Exploring the validity of strategic competence with respect to SL/FL writing required examining students’ cognitive processes in order to establish the validity of the model. Therefore, GT, the qualitative approach adopted in the present study, which “gives priority to the studied phenomenon or process – rather than to a description of a setting” (Charmaz 2006a, b, p. 22), allowed us to develop a grounded model of the writing processes of EFL students in more theoretical terms. Employing grounded theory enabled us to address the validity of the components of strategic competence in an EFL context with reference to the emerging processes involved in the GPT.

Graph comprehension

In graph-writing research, researchers have presented different models of the processes of graph comprehension and graph writing.

Pinker (1990) proposed a model of graph comprehension. To do so, he first defined the visual array (italicized in the original) as the early visual representation that displays the input in a relatively unprocessed, pictorial format. This kind of information alone is not sufficient to comprehend a graph. As Pinker (1990) pointed out, a further representational format was needed to link with memory representations incorporating knowledge of what the visual marks of the graph convey. Another term used in his model is the visual description (italicized in the original), referring to the structural description delineating a graph, while visual encoding processes denote the mechanisms that generate a visual description from a visual array pattern. Another important structure in this model is the graph schema. According to Pinker (1990), the graph schema tackles three important tasks: 1) determining what kind of graph is being viewed, 2) finding appropriate and relevant pieces of information in the graph, and 3) converting the information embodied in the visual description into quantitative information in the conceptual message. Pinker (1990) also described four procedures for comprehending the structures that depict graphic information. The first is a match process (a term taken from Anderson and Bower’s (1973) theory of long-term memory) that compares a visual description with every memory schema and applies the matching schema to recognize a specific type of graph (e.g., a bar graph). The second is a message assembly process that utilizes the graph schema to generate a conceptual message. The third is an interrogation process that retrieves or encodes new information that is not currently in the conceptual message, and the last consists of a set of inferential processes that execute mathematical and inference rules on the quantitative information in the conceptual message (e.g., computing the rate of decrease or increase of one variable, or subtracting one value from another) or draw inferences from the context of the graph (e.g., the paragraph in which it is embedded).

Kintsch (1988) provided a useful framework, the Construction-Integration (CI) model of text and discourse comprehension, for investigating the influence of graph display and prior knowledge on graph comprehension. According to the CI model, visual display characteristics, including format and color, influence both the low-level perceptual aspects of graph comprehension and the high-level cognitive processes. Freedman and Shah (2002) elaborated on the CI model and suggested that knowledge-based graph comprehension comprises an interaction between top-down and bottom-up processing, which is affected not only by graph characteristics but also by several types of knowledge, such as graphical skill, domain knowledge, and explanatory skill.

Moreover, some studies (Carpenter and Shah 1998; Pinker 1990; Shah and Carpenter 1995) have described a model of graph comprehension involving three main processes: encoding the visual array, identifying relations among the features, and relating quantitative relations to the graphic variables. These three processes are iterative, in that visual patterns are encoded, quantitative facts are identified, and these are related to graphic referents (Carpenter and Shah 1998). With regard to relating quantitative relations to graphic variables, Shah et al. (1999) focused on identifying quantitative facts or trends from a graph. They found that graph interpretation involved pattern perception and association processes as well as more complex processes such as inferential processes. Moreover, the results illustrated that the perceptual organization of the data affected the interpretation of graphical information. Another study, conducted by O’Loughlin and Wigglesworth (2003), showed that the quantity and manner of presentation of information may affect the difficulty of graph-based writing tasks.

Recently, Yang (2012) conducted a questionnaire-based study to examine test takers’ strategies in relation to their performance on graph-writing tasks. The results demonstrated three main strategies: graph comprehension, graph translation, and graph interpretation. However, this product-oriented study did not provide full insight into the mental processes of test takers while completing graph tasks. In his paper, Yang (2012) acknowledged that “more qualitative analyses, such as verbal protocol and eye tracking data offer more insight into writers’ mental operations in responding to the task”.

In a related qualitative study, Yu et al. (2011) examined the cognitive processes of test takers taking the IELTS GPT as affected by the use of different graphs, their graphic skills, their English writing abilities, and short training. Three main processes were explored: comprehending non-graphically presented task instructions, comprehending graphic information, and reproducing graph comprehension.

Although the model of Yu et al. (2011) provides some useful insights into the general processes involved with different types of graph, it is limited in that it does not fully reflect how graph-writing processes vary according to the type of graph and does not distinguish the cognitive processes of each graph type. Accordingly, Yu et al. (2012) stated that the use of different graphic prompts (e.g., line graph, bar graph, pie chart, map, and flow chart) can trigger different forms of “cognitive naturalness” at different stages of writing.

The present study improves upon the existing literature in two ways. Unlike the quantitative studies of Phakiti (2008b) and Purpura (1999), which set out to validate Bachman and Palmer’s (1996) strategic competence in light of reading comprehension, the present study was designed to explore the elements of strategic competence in a graph-writing task through a process-oriented approach. Moreover, previous studies of graph comprehension did not specifically investigate the cognitive processes of a bar-graph writing task with reference to the strategic competence model. Even the model proposed by Yu et al. (2012) did not consider the specific processes of each graph format separately; instead, it focused on graph comprehension theories rather than the theory of strategic competence presented by Bachman and Palmer (1996). Accordingly, we aimed to validate the model of strategic competence by analyzing not only the written scripts of the GPT but also test takers’ cognitive processes through a GT approach.

Therefore, the following question guides the present study:

  • In what strategic processes were students involved while completing the GPT?

Method

Brief introduction to constructivist grounded theory

Charmaz (2008a) takes a constructionist approach in which neither data nor theories are discovered. Rather, both researchers and participants are part of the world in which grounded theories are constructed through the active role of the researchers and the views and voices of the participants. In this paradigm, the resulting theory is an interpretation of the studied object that depends on the researcher’s view, rather than standing outside it.

Charmaz (2014) proposes several strategies that grounded theorists should undertake. First, data collection and analysis are carried out simultaneously. At this stage, theoretical sampling is used to collect further data that account for the variations and similarities emerging from tentative categories. Second, data are analyzed through coding that looks for processes and actions rather than themes and structures. The coding procedure consists of four phases: initial coding, focused coding, axial coding, and theoretical coding. The aim of initial coding is “to remain open to all possible theoretical directions indicated by your readings of the data” (Charmaz 2006a, p. 46). In initial coding, data are divided into separate segments, examined, and compared with other pieces of data to explore similarities and differences (Strauss and Corbin 1998). Concurrent with initial coding, in vivo coding is used to “preserve participants’ meanings of their views and actions” (Charmaz 2006b, p. 54) and to “provide a crucial check on whether you have grasped what is significant” to the participant, and may help “crystallize and condense meanings” (p. 57).

The second stage of coding includes focused coding, axial coding, and theoretical coding, which occur in sequence. In focused coding, we looked for the most frequent or significant initial codes to “develop the most salient categories” in the data corpus, which “requires decisions about which initial codes make the most analytic sense” (Charmaz 2006a, pp. 46, 57).

Then, in axial coding, the developed categories are associated with subcategories at the level of dimensions and properties to represent a more comprehensive and accurate description of the phenomena (Strauss and Corbin 1998). Strauss and Corbin (1998) state, “When analysts code axially, they look for answers to questions such as why or how come, where, when, how, and with what results, and in doing so uncover relationships among categories” (p. 127). During the coding processes at all phases, constant comparative analysis is employed. According to Spiggle (1994), “comparison explores differences and similarities across incidents within the data currently collected and provides guidelines for collecting additional data. … Analysis explicitly compares each incident in the data with other incidents appearing to belong to the same category, exploring their similarities and differences”.

Finally, in theoretical coding, researchers draw on the data to develop new conceptual categories and to specify possible relationships among the categories and subcategories developed in axial coding. Inductive, abstract analytic categories are then developed that account for variations and commonalities in the categories in order to construct theory, instead of applying existing theories.
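For readers who prefer a concrete analogue of this staged workflow, the short sketch below models it programmatically. It is only an illustration under stated assumptions: the class name, the example codes, and the frequency threshold are hypothetical and do not correspond to any tool or dataset used in this study, and focused coding in practice also weighs analytic significance, not just frequency.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One unit of transcribed data with its initial (line-by-line or in vivo) codes."""
    text: str
    initial_codes: list[str] = field(default_factory=list)

def focused_codes(segments: list[Segment], min_frequency: int = 2) -> list[str]:
    """A rough analogue of focused coding: keep the most frequent initial codes."""
    counts = Counter(code for seg in segments for code in seg.initial_codes)
    return [code for code, n in counts.most_common() if n >= min_frequency]

# Hypothetical example (not actual study data):
segments = [
    Segment("you should spend 20 mins on this task", ["reading allocated time"]),
    Segment("summarize the information by selecting...", ["reading instruction", "underlining"]),
    Segment("what does second class degree mean?", ["self-questioning", "reading instruction"]),
]
print(focused_codes(segments))  # ['reading instruction']
```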

Participants

Sampling in qualitative research is not carried out to represent a population or to enhance the statistical generalizability of findings (Charmaz 2006a, b). The purpose of sampling in GT, in particular, is twofold: initially, participants with similar characteristics are selected to define categories and their properties; later, theoretical sampling is used to identify pertinent data in an effort to refine and elaborate the emerging tentative categories and to compare the similarities and differences of categories along their properties (Charmaz 2006a, b).

A total of 8 participants (two males and six females), recruited in two rounds of four students each, took part in this study. Initially, we sought participants who studied English Language Teaching at Vali-e-asr University of Rafsanjan and had experience of teaching English. The first four participants were selected non-randomly through convenience sampling. This procedure was applied to identify the initial group of participants and their tentative emerging categories. As the study progressed, we used theoretical sampling to identify additional participants; the emergent categories formed the basis of this theoretical sampling.

Charmaz (2008b) states that

“Grounded theorists cannot anticipate where their theoretical inquiry will take them. Their tentative categories arise through the analytic process, and thus theoretical sampling may take them into new research sites and substantive areas” (p. 167).

As we jointly collected and analyzed the data drawn from the first four participants, we decided what to collect next in order to develop and refine the tentative categories. Thereafter, four other participants were selected through theoretical sampling, enabling us to examine, qualify, and specify relationships among the emerging categories across these two groups of participants.

Instruments

To gather data that would reveal students’ mental processes explicitly, we used four types of instruments: audio/video recording, observation, stimulated recall interviews, and sampling of compositions (e.g., the GPT).

Procedure


The methods of data collection aimed to provide detailed, descriptive information about the cognitive processes of the subjects with reference to the GPT (see Additional file 1: Appendix A). In the GPT, an integrated writing task, candidates are asked to “describe some information (graph/chart/table/diagram), and to present the description in their own words”. It is recommended that candidates spend 20 min on this task and write at least 150 words. Test takers need to “organize, present and compare data, describe the stages of a process or procedure, describe an object or event or sequence of events, or explain how something works” (IELTS Handbook 2006, p. 8). This task requires test takers not only to comprehend the graphic prompt but also to regenerate the information in their own words (Yu et al. 2010).

Before the first phase of the think-aloud procedure, the first researcher, as a teacher, practiced thinking aloud on three different types of writing tasks (see Additional file 1: Appendix A). She verbalized her thoughts three times for each task to identify problems related to the use of the think-aloud protocol and practiced how to model the think-aloud protocol for the subjects under study.

Moreover, a pilot study was carried out with one student to assess the efficacy, timing, and feasibility of using the think-aloud protocol in this study. She was given a graph-writing task (see Additional file 1: Appendix B) similar to the one implemented in the real task session and was asked to verbalize the processes and strategies she used to respond to the task.

Then, in a warm-up session, the first researcher instructed the students in how to verbalize the tasks (see Additional file 1: Appendix C). She modeled concurrent think-aloud using a bar-graph writing task (see Additional file 1: Appendix B). Furthermore, each subject was given two tasks with their answers (Additional file 1: Appendix B) to become familiar with the task format and to help them self-assess the strategies or knowledge required to complete the real tasks later. Then, they were given the real tasks one by one (see Additional file 1: Appendix D). The procedure of data collection is shown in the following diagram:

The first phase of data collection included steps one, two, and three, as indicated in the diagram. The first four participants, chosen through convenience sampling, were given the bar-graph writing task and then asked to verbalize their thoughts while completing the GPT (see Additional file 1: Appendix D). They were asked to verbalize (in English or Persian) their thoughts and explicate the processes in which they were involved to complete the task (i.e., step 2). Each think-aloud session lasted from about one hour and 15 min to two hours and 30 min. Their verbalizations were audio/video recorded. In the same session, they watched their recorded videos and recalled what processes they had employed to complete the GPT (i.e., step 3). These interviews were audio/video recorded, and each lasted about 45 to 55 min.

All the think-aloud and interview protocols were transcribed verbatim. The transcriptions were divided into segments, which were then classified into categories and subcategories (i.e., step 4). Constant comparative analysis was conducted among the emerging categories to find the similarities and differences in the cognitive processes underlying students’ performance (i.e., step 5). The emerging categories were tentative and did not reflect a full understanding of the processes the students applied. Thus, the second researcher used theoretical sampling to examine these categories by selecting 10 more students (i.e., step 6). Theoretical sampling helped to elaborate and refine the emerging categories and to gain more understanding of the processes all the students employed.

Concurrent with the think-aloud and stimulated recall interviews during the first and second phases of data collection, how students interacted with and responded to the task was observed and noted in full. The second researcher noted the wide range of processes that they utilized to compose the GPT and the VPT differently. She took field notes during the 15 observation sessions. Observations of each participant on the GPT lasted from 55 min to 2 h and 13 min, and the notes were then coded and recoded.

Data analysis

Grounded theory methods of analysis comprise coding in four phases: initial coding, focused coding, axial coding, and theoretical coding. The coding procedure involves identifying what the data illustrate and assigning them a name that concurrently classifies, summarizes, and accounts for each piece of data (Charmaz 2006a). According to Charmaz (2006a, b), coding is the main link between data collection and developing a theory to account for the data under study.

During the coding procedures, researchers are involved in “a process of constantly analyzing data at every and all stages of the data collection and interpretation process [that] results in the identification of codes” (Jones et al. 2006, p. 43–44).

In this study, we followed the coding procedure proposed by Saldana (2012) and Charmaz (2006a, b). Coding was carried out in two cycles: the first cycle (initial coding) and the second cycle (focused coding, axial coding, and theoretical coding). In addition, the qualitative analysis software NVivo 10 was used to import the audio/video recordings, transcribe them, code all the materials, and visualize and summarize the codes into categories and subcategories.
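As a purely illustrative supplement, the sketch below shows one way the output of these two coding cycles could be tallied outside NVivo, grouping focused codes under broader categories and counting how often each occurs. The mapping, code labels, and data are hypothetical; they echo the category names reported later in this paper but are not drawn from the study’s actual coding tables, and the real analysis was carried out in NVivo 10’s interface rather than in code.

```python
from collections import defaultdict

# Hypothetical mapping of focused codes to broader categories (for illustration only).
AXIAL_MAP = {
    "self-questioning": "Analyzing Non-Graphic Information of Instruction",
    "note taking": "Translating Graphic and Non-Graphic Information into Written Discourse",
    "comparing/contrasting graph features": "Examining Graph Comprehension",
    "inferring": "Retrieving Personal Interpretation and Additional Reasoning",
    "rereading": "Reformulating Graph Description into EFL Written Discourse",
}

def summarize(coded_segments: list[tuple[str, str]]) -> dict[str, dict[str, int]]:
    """Tally (participant, focused_code) pairs under their broader categories."""
    summary: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for _participant, code in coded_segments:
        category = AXIAL_MAP.get(code, "Uncategorized")
        summary[category][code] += 1
    return {category: dict(codes) for category, codes in summary.items()}

# Hypothetical coded segments, not actual study data:
print(summarize([("A", "self-questioning"), ("A", "rereading"), ("B", "inferring")]))
```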

The stages of coding are shown in the following Fig. 1:

Fig. 1 Procedure of analysis and coding

First cycle of coding

Initial coding

I divided the transcriptions of the audio/video recordings, the observation notes, and the written samples of both tasks (i.e., the GPT and the VPT) into separate segments. I read each segment closely, line by line. Each line was assigned a code that accounted for the implicit meanings and explicit statements invoked by the participant. As I coded each line of the transcription segments, I attended to words and phrases generated by the participants themselves. Concurrent with line-by-line coding, these words or phrases were labeled using the participants’ own wording. This practice, called in vivo coding, preserved the participants’ own meanings and processes and illustrated the taken-for-granted assumptions that students might have invoked in verbalizing their processes. I went back and forth between line-by-line coding and in vivo coding. Every time I read the data segments, I came across phrases in the students’ own language and words. As I coded these segments using the students’ own words, I reviewed the line-by-line codes to ensure that all the data, including hidden assumptions, implicit meanings, and explicit statements, were accurately coded through either line-by-line or in vivo coding.

We then used constant comparative analysis to establish analytic directions toward the similarities and differences among the emerging categories. We also raised some questions to examine the emerging processes critically and analytically. The questions included:

  • What process is at issue here?

  • How can we define it?

  • Why did the participants apply the processes?

As we coded each line and assigned it a name, we gained insight into the collected data that directed further inquiry throughout the data collection process. For instance, when we explored the cognitive processes of the later participants, we returned to the earlier test takers to examine whether the processes stated by the later respondents shared commonalities with, or varied from, the processes that the earlier test takers had applied. This informed us of the potential meanings evoked by the respondents’ words and gave us insight into the nuances of the students’ meanings so as to find commonalities and differences among their emerging categories. In this study, 80–140 segments were coded for each participant, constituting 1596 codes developed and labeled for the GPT.

Second cycle of coding

Focused coding

According to Charmaz and Smith (2003), “moving to focused coding is not entirely a linear process” (p. 96). As indicated in Fig. 1, there was a parallel relationship between the first and second cycles of coding. I returned to codes that had emerged in the first cycle to identify nuances of the processes students applied that might have been ignored or left unexpressed. I then paid meticulous attention to language and reflected deeply on the emergent meanings underlying students’ mental processes. I re-coded four times to refine the categories, subsume them under other codes, relabel them, or drop them. Some categories were re-coded and replaced with more accurate words or phrases explored during the analysis. Some codes were integrated because they illustrated similar concepts. Other codes were eliminated because they were later judged “redundant” or “marginal” (terms drawn from Lewins and Silver 2007, p. 100). I also assessed which codes best demonstrated what was happening in the data. While identifying the salient categories, I compared the categories with each other and with the data to refine them and to find the major categories that best fit the data.

Axial coding

Axial coding extends the analytic work on the data segments coded in initial and focused coding. The categories coded at previous stages were compared with other categories and subcategories to find the relationships among them (i.e., step 3) and to generate the common pattern underlying the processes of the GPT. I also reflected on the interrelationships between categories and subcategories to assess which relationships best captured what was happening in the data, and then carried them forward to the next stage to develop analytic models. The numbers of categories and subcategories were reduced to six and 16, respectively, for the GPT.

Theoretical coding

In initial coding, I used line-by-line and in vivo coding to name and categorize the segments of the data. During focused coding, the most significant categories, those that best demonstrated the data underlying the students’ cognitive processes, were identified. Then, I used axial coding to compare different categories and subcategories across the data and to establish connections among them. To develop the common pattern of processes involved in the GPT, I reviewed the codes developed during the first cycle of coding, focused coding, and axial coding to explore the commonalities and variations among the categories and subcategories and to describe the network of interrelationships in the data. As the main relationships among the categories and subcategories evolved and were recognized, I started writing memos on each category, subcategory, and their interrelationships to find the main theme underlying the wide range of processes the students applied in the two tasks. I then sorted the memos to carry out theoretical integration of the emerging categories and to refine comparisons among those relationships in order to construct the common pattern grounded in the data drawn from the GPT. I also used diagramming to provide a visual description of the categories and subcategories and to integrate them into one coherent model. I moved back and forth between writing memos and drawing diagrams to see where each category had a strong or weak relation to each subcategory and attempted to refine the interrelationships underlying the emerging processes in the two tasks separately. Writing memos, sorting them, and drawing diagrams revealed a common pattern among the six major categories and 16 subcategories of the GPT, representing the cognitive processes in which participants were engaged to complete the task.

Establishing trustworthiness

As is conventional in grounded theory, researchers attempt to provide evidence of whether their findings are an accurate statement of the research problem under study (Glaser and Strauss 1967). In this study, we employed the four criteria proposed by Lincoln and Guba (1985) to establish trustworthiness: credibility, transferability, dependability, and confirmability.

The procedure of establishing trustworthiness is presented in the following Fig. 2:

Fig. 2 The diagram of establishing trustworthiness

Credibility was established in two ways: through triangulation and continuous observation. Triangulation enabled us to sort the data and explore the common theme through the use of multiple methods, such as observations, interviews, think-aloud protocols, and samples of the participants’ writing. To achieve transferability, we drew on the “descriptive adequacy” proposed by Lincoln and Guba (1985, p. 501) to provide a rich, complete, and detailed description of the participants, methodology, results, and emergent theory, which has been done throughout this study. To attest to the dependability of the procedures, I used an audit trail to examine whether the findings were appropriately gathered and grounded in the data that were collected. Confirmability was addressed by examining whether the data collected and the conclusions drawn by the second researcher could be confirmed by the first researcher, who is knowledgeable about grounded theory. The data and analysis were therefore reviewed together, and confirmability was established.

Results

The findings explaining the cognitive processes and their relation to subprocesses consist of Analyzing Non-Graphic Information (a term drawn from Yu et al. 2011), Translating Graphic and Non-Graphic Information into Written Discourse, Examining Graph Comprehension, Retrieving Personal Interpretation and Additional Reasoning, and Reformulating Graph Description into EFL Written Discourse.

Analyzing non-graphic information of Instruction

All the participants tried to analyze the non-graphic information of the instruction presented before the graph through subprocesses such as reading the allocated time, reading the preliminary section, underlining key elements, questioning, and reading the minimum number of words, in an iterative way. When they reread the introductory part several times, they identified more information and questioned the elements of the instruction.

For example, participant A verbalized as follows:

Extract 1:

Writing task1. let see what task is. you should spend 20 mins on this task. so it’s a graph. what are the instruction here. the graph compares the percentage of international and uk stuents so gaining second class degrees or better.


Participant D verbalized as follows:

Extract 2:

You should spend about 20 min on this task. The graph compares the percentage of international and UK students gaining second class degree or better at a major UK university. Summarize the information by selecting and reporting the main features (highlighting). Therefore, I do not have to write details, just a report. Write at least 150 words like an abstract of paper. I reread the title carefully to identify what instruction means. Yes, it is comparing UK and international students gaining second class degree. What does second class degree mean?

As the two researchers reviewed the students’ verbalizations of the processes they were involved in while examining the different non-graphic information presented in the instruction, they identified one dominant pattern, which consisted of reading the introductory sentence, reading the “summarize the information” directive to write a report, reading the time allowance, and reading the expected length. Moreover, as students read the different non-graphic information in the instruction, they underlined the important key elements, including the introductory part, and self-questioned this part to comprehend what it meant.

Examining graphic information

Having comprehended the non-graphic information, all test takers started to examine the graph features (e.g., captions, colors, title, the two axes, and percentages) and to integrate them with the pattern underlying the graph in order to compare and contrast, categorize graphic features, and retrieve graphicacy skills and explanatory reasoning to identify significant information or patterns from which they drew personal interpretations (personal interpretation is not required by the GPT). All the participants were involved in these iterative processes, which interacted with the next process (i.e., reformulating and representing graphic and non-graphic features in written discourse in a foreign language) and with the process of translating graphic and non-graphic information through note-taking. As they constantly assessed the different graphic features, they took notes or wrote sentences that enabled them to categorize and to compare and contrast the identified graphic features. While integrating the graphic referents with the pattern underlying the graph, they used their personal interpretations to identify a significant pattern or information and additional reasoning to make inferences about differences in the information presented in the graph. These two subprocesses, identifying significant information and inferring, were also grouped under another main process, called Retrieving Personal Interpretation and Additional Reasoning.

Interpreting graphic referents

The next strategy involves finding graph characteristics. It consists of ‘reading caption’, ‘reading title’, ‘scanning axes’, and ‘scanning colors’. As the concepts derived from the think-aloud protocols, interviews, and observations demonstrated, all the participants engaged in these subprocesses to explore the graph features and understand the information given in the instruction. Since they reported that one phrase in the instruction (‘second class degree’) was not clear, they reread the instruction several times and looked for that information in the graphic referents.

For example, test taker B stated

Extract 3:

I do not know what second class degree means here. After I read the two axes and title I understand that it is international students.

Participant R identified the exact percentage of each group in each major

Extract 4:

I am reading title of graph. Well, in graph, there are majors. The range of UK and international students is different. In nursing, students are equal, about 73%. In electrical engineering, UK students are 60% and international students are 80%.

Additionally, all participants moved between identifying graphic referents and integrating those referents with graphic functions. The more graphic referents they recognized, the more they related them to graphic functions in order to recognize a pattern.

Integrating graphic function with graphic referents

This strategy consists of four subcategories: ‘clustering/classifying graph features through use of arithmetic information, color, and number of bars’, ‘identifying significant information using graphicacy skill’, ‘comparing/contrasting graph features’, and ‘inferring by using explanatory reasoning’. At this stage, some participants started by relating what they had found as graphic functions (e.g., one bar being higher than, lower than, or equal to the others) to the identified graphic referents, such as the quantitative features presented in the graph. It should be noted that two of these strategies, identifying significant information and inferring, are also affected by another main strategy, called Retrieving Personal Interpretation and Additional Reasoning.

Clustering/Classifying graph features

All participants categorized and clustered the features of the graph. Most of them used clustering to classify the information presented on the two axes by using arithmetic information (e.g., percentages).

For example, test taker H verbalized as follows:

Extract 5:

I used percentage to identify which element is lower or higher. About 70% of both groups of students study nursing. In engineering, UK students are 60% and international students are 80%. The next percentage is technology ranging from 58 to 83%. The next percentage belongs to literature which is 78% for British students and 58% for international students. The next is art including about 81% of UK group and 78% of international group.

Participant E explained as follows:

I have three groups, so I have to categorize each element sharing communality with others in one group. International students are in one group and UK students in one group. Then, I compare them”.

Participant N also verbalized as follows:

Extract 6:

Those with similar features and equal percentage are grouped. Thus, I put number one above this group. Those who UK students are more I name B. I am classifying based on percentage. Group C including English literature, art, law, and sociology in which UK students are more than International students.

As is evident in the above extracts, the test takers followed common processes, such as scanning the two axes (covering the different majors and percentages) as well as the captions, in order to classify the information presented in the axes and the instruction and to identify similarities and differences among them.

Comparing/Contrasting graph features

Most of the participants compared and contrasted the graph’s elements to recognize a pattern underlying the graph. They were involved in multiple retrieval and comparison processes to make inferences about, or find reasons for, the differences and similarities they found among the different categories of graphic information.

Test taker M verbalized as follows:

Extract 7:

I make a comparison between majors in which British students are more frequent and I show them with a mark and majors in which international students are more and I show them with a star. Then, I want to relate the identified percentage of each element to each group, but as I see it is not possible. Since the exact percentage was not obvious, I have to write which element is higher/lower than another element.

Participant N explained as follows:

Extract 8:

As I see in the graph, UK students are better in social science. Nursing is related to applied science, but accountancy is not. I can make an inference here that UK students are more successful in social science majors, which are not technical such as law, art, English literature, and sociology. It is interesting to say that UK students are more frequent in English literature about 20%, which is not scant.

Test taker M verbalized as follows:

Extract 9:

I can examine majors based on different things such as language, culture, and practice. Based on language, some majors related to language and mind are literature, art, and accountancy. If I want to compare, there are four majors related to mind. I want to identify which group performed better in majors related to language including art and literature. Did the UK students perform better in sociocultural majors? Yes, because they were more familiar with their own culture.

On the whole, all the test takers juxtaposed different categories of graphic information to identify the commonalities and variations among them. To this aim, they used the length of the bars to show which bar was higher or lower than the others, drew a line from the x-axis to the y-axis to specify the exact percentages, and counted the number of bars to identify which group of people, as indicated in the captions of the graph, outnumbered the other groups, in order to compare and contrast them.

Retrieving personal interpretation and additional reasoning

Most of the students employed the identified similarities and differences among the different categories in order to identify a significant pattern in the graph. Moreover, they used their personal interpretation (not required in the GPT) to make inferences about the relation or pattern they identified in the bar graph. It should be pointed out that the phrase “significant point or pattern” was used by most of the test takers to explain the basis on which they attempted to interpret the information in the graph.

Identifying significant information

Four participants used their graphical skills to find a significant feature on which to base the graph description. Participant B employed her background knowledge to classify the information given in the graph. This graphicacy skill refers to retrieving background knowledge related to graph description.

Test taker B verbalized as follows:

Extract 10:

Based on what I have learnt in writing class, I have to describe and classify the information given in the graph based on a significant element catching my attention.

Participant H stated as follows:

Extract 11:

Before I want to describe information in the graph, I have to find a significant point, but the first point which catches my attention is that there is no big difference in the performance of the two groups.

These extracts from the think-aloud protocols show that the phrase ‘significant information/pattern/point’ was often used by these participants. When they juxtaposed different categories of graphic and non-graphic information, they identified the potential differences and similarities between the categories by using the differences between percentages. They then reviewed these comparisons and contrasts to arrive at a significant pattern underlying them. In addition, they tried to make personal interpretations to infer what the pattern in the graph was intended to demonstrate, particularly to readers.

Inferring

In this process, all the participants provided additional personal reasoning and explanations (not required by the GPT) to make inferences about what the non-graphic and graphic information was intended to convey to readers. Four participants offered possible interpretations of the data by suggesting additional analyses and inferences.

Participant D verbalized as follows:

Extract 12:

I want to reason why British people pay attention more to literature. I inferred this reason that there is direct relationship between humanity and culture.

Participant H verbalized as follows:

Extract 13:

Based on these percentages, I can infer that UK students have better performance in social science in comparison to international students. Moreover, I can say that international students get more success in applied science majors more related with mathematics.

As is evident in these extracts, the test takers tried to provide additional reasoning and explanations that were not explicitly represented in the graph or the instruction in order to make inferences about the information presented in the graph. Although the GPT does not require additional reasoning or personal interpretation, the test takers focused deeply on the graphic information to comprehend why there were differences between the categories of features or information they had identified in the graph.

Translating graphic and non-graphic information into written discourse

Five participants summarized the graphic and non-graphic information given in the instruction and the bar graph either as key words or as sentences. Through this strategy, the test takers also engaged in identifying graph features, which enabled them to take notes. As they found more information, they tried to categorize it in order to derive a pattern to describe.

Participant H stated as follows:

Extract 14:

As I read instruction I take notes, then I find the percentage of each element in the graph to summarize them, make comparison, and find a pattern.

Test taker G stated as follows:

Extract 15:

I found what information the graph explains, so now I summarize them not to distract with abundant and unorganized ideas while I want to write.

In light of these processes invoked by the participants, the dominant process that most of them used was note-taking, through which they could summarize both the non-graphic information of the instruction and the graphic information, including the differences between percentages and the information presented on the x-axis. Note-taking helped them identify the exact percentages by which they categorized all the information on the two axes and juxtaposed it to find a main point.

Reformulating graph description into written discourse

As test takers examined their graphic and non-graphic comprehension, they summarized the identified elements as key notes or sentences. In order to improve the quality of their compositions in terms of linguistic accuracy (e.g., grammar, lexis, spelling, and punctuation), coherence, and cohesion, they reworked their responses through rereading and rewriting. Rereading helped them monitor their comprehension, trigger new ideas, and regenerate the written information. Moreover, rewriting enabled them not only to edit the linguistic aspects of the written text in terms of punctuation, grammar, spelling, vocabulary, and coherence but also to revise the written features and replace them with new information.

Participant D explained as follows:

Extract 16:

Rereading helps me to provide a smooth transition between paragraphs and reexamine the graph main features to check my comprehension. I also rewrite some words which are repetitive. I reread the text. Here, the sentence is not finished. Ok, I wrote it wrongly. I have to change the word on the other hand to however. This revision makes that sentence better.

Participant B stated as follows:

Extract 17:

As I read silently I check the graph. It is a way of matching the information by rereading my text and looking at the graph. Now, I want to check my writing, I start to reread. Before starting to reread, I count the number of words. Ok. I reread to find out what new ideas come to my mind. Ok. One word is revised by replacing with another word. I add this word because the number of words was insufficient. I also check the graph to examine whether I explained the significant features

Test taker E explained as follows:

Extract 18:

I read the whole text to check the appropriacy of content, sequence of paragraphs, and check the ambiguity of sentences. I also rewrite some words lexically. Now, I reread silently to evaluate my comprehension and revise the text. I look at the graph to check what I have written corresponds with the graphic information. The percentage of nursing is 60%. Yes, it is correct. I want to use instruction’s phrase in my own words.

These extracts show a set of common processes through which the test takers reproduced the graphic and non-graphic information as accurate and coherent written discourse in terms of vocabulary, grammar, spelling, and punctuation. In addition to checking the linguistic accuracy of their texts, they monitored their writing performance and evaluated their comprehension of the information presented in the instruction and in the graph so as to refine what they had written and identify new ideas, producing an organized and accurate written report through both rewriting and rereading.

The GPT model grounded in the data

All the emergent processes were juxtaposed to identify a common pattern on which most of the test takers drew to produce their responses. To this end, I reviewed the students' transcriptions, including think-aloud and interview protocols as well as observation notes. I took notes on each process to understand what it conveyed and what relationships might exist among the different processes. I then classified them into categories and subcategories and drew diagrams to establish whether there was a common pattern or relationship among the processes. Diagramming illuminated a dominant pattern among six main processes and their relationships with subprocesses. These six categories were Analyzing Non-Graphic Information in the Instruction (reading the allocated time, reading the preliminary section, reading the "summarize the information…" sentence, reading the expected length, underlining, and self-questioning), Translating Graphic and Non-Graphic Information into Written Discourse, Recognizing the Instruction's Information in the Graph, Examining Graph Comprehension (classifying, comparing/contrasting, identifying significant information, and inferring), Retrieving Personal Interpretations and Additional Reasoning (identifying significant information and inferring), and Reformulating the Graph Description into EFL Written Discourse (rereading to monitor test performance, evaluating comprehension, identifying new ideas through rereading, and revising and editing the written text through rewriting). These processes and subprocesses were grounded in the data taken from the students' verbalizations and the observation notes. Together, they constitute a unified model of the common pattern of cognitive processes the test takers employed to complete the GPT. A visual representation of the grounded model is included in Fig. 3.

Fig. 3. Grounded model of cognitive processes in GPT
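To make the nesting of categories and subprocesses easier to scan, the sketch below re-expresses the hierarchy listed above as a plain data structure. This is only an illustrative convenience: the category and subprocess labels are taken from the grounded model as reported here, but the representation itself is ours and is neither NVivo output nor part of the analysis.

```python
# Illustrative only: the grounded model's categories and subcategories
# (as listed above) expressed as a nested Python dictionary.
GPT_GROUNDED_MODEL = {
    "Analyzing Non-Graphic Information in the Instruction": [
        "reading allocated time", "reading preliminary section",
        "reading 'summarize the information...'", "reading expected length",
        "underlining", "self-questioning",
    ],
    "Translating Graphic and Non-Graphic Information into Written Discourse": [
        "note taking / summarizing as key words or sentences",
    ],
    "Recognizing the Instruction's Information in the Graph": [],
    "Examining Graph Comprehension": [
        "classifying", "comparing/contrasting",
        "identifying significant information", "inferring",
    ],
    "Retrieving Personal Interpretations and Additional Reasoning": [
        "identifying significant information", "inferring",
    ],
    "Reformulating the Graph Description into EFL Written Discourse": [
        "rereading to monitor test performance", "evaluating comprehension",
        "identifying new ideas through rereading",
        "revising and editing through rewriting",
    ],
}

if __name__ == "__main__":
    # Print the hierarchy as an indented outline.
    for category, subprocesses in GPT_GROUNDED_MODEL.items():
        print(category)
        for sub in subprocesses:
            print("  -", sub)
```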

Discussion

The goal of this study was to use GT to describe how students engaged with the GPT and, on that basis, to examine the validity of the model of strategic competence. Eight post-graduate students participated. Four types of data were collected: think-aloud protocols, stimulated recall interviews, observations, and written samples. Five emergent categories constitute the findings of this study: analyzing the instruction (e.g., reading the allocated time, reading the introductory parts, underlining key words, self-questioning, and reading the minimum number of words); translating graphic and non-graphic information through note taking; examining graph comprehension (e.g., interpreting graphic referents and integrating graphic referents with graphic functions through classifying identified features, comparing/contrasting, identifying significant information, and inferring); retrieving personal interpretations and additional reasoning (e.g., identifying significant information and inferring); and reformulating the graph description into written discourse (e.g., revising the graph's main features/editing the written text, monitoring and evaluating, rereading, and counting the number of written words).

The aim of this study was to find out whether the processes emerging in the GPT grounded model validate the components of strategic competence proposed by Bachman and Palmer (1996). Bachman and Palmer (1996) described strategic competence as a set of metacognitive strategies comprising three main components: goal setting, assessment, and planning. Phakiti (2008a), in turn, stated that both strategic knowledge (knowledge about what, how, and when to apply a set of strategies) and strategic regulation (online realization and regulation of cognitive processing) form the theory of strategic competence. Likewise, Purpura (1999) argued that strategic competence as presented by Bachman and Palmer (1996) must be expanded beyond a set of metacognitive strategies, because students use cognitive, affective, and social strategies, along with metacognitive strategies, when they use the target language. In line with Phakiti (2008a) and Purpura (1999), the grounded model of this study suggests that both cognitive and metacognitive strategies constitute strategic competence in graph writing. For instance, the students invoked a set of cognitive strategies, including underlining key words, which enabled them to create a structure for their compositions and to notice key elements to take notes on. Another cognitive strategy is the translation of both graphic and non-graphic information into written discourse in English, which paved the way for them to classify identified elements from different perspectives, compare the elements identified in the graph and the instruction, or recognize a pattern at the beginning of the main writing. This strategy also gave them the means to cluster the information on the basis of arithmetic information (e.g., percentages) and compute differences between quantitative relations in order to find a relation among bars. As they identified more features in the graph, they changed their classifications and took more specific notes in order to refine the categories and explore the pattern underlying the classifications.

As observed in Fig. 4, with reference to the goal-setting component, participants employed different mental subprocesses to set different goals for describing the GPT. For instance, test takers set goals to integrate the graphic referents with graphic functions in the graph. They set a goal to classify and cluster information in terms of graphical referents such as the title, caption, and the two axes, which is consistent with Carpenter and Shah (1998), who suggested that pattern recognition consists of scanning the location of graph referents (e.g., axes, caption, and title) and then identifying a functional relation that describes the nature of the x-y relation. In this study, all the participants identified the graphical referents, but four of them planned, through activating their prior knowledge and graphicacy skill, to describe the graph on the basis of its significant point as a distinct goal.

Fig. 4. The procedure of sampling and data collection

According to the findings, test takers differed in their knowledge about the relationship between visual features and their interpretations, which led them to set different goals and plans. They planned to retrieve their graphical skill to understand the relationship between the two axes (x-axis and y-axis) and the different bars. For example, some students pointed out that the taller bar in the graph displayed the higher percentage and computed the differences between percentages to report the underlying relations.

In addition to their graph schema and graphical skill, they made different plans by drawing on their explanatory skill, involving scientific reasoning or expertise in interpreting the data represented in the graph. Accordingly, the test takers suggested additional analyses and reasons for the relationships identified in the graph, which they interpreted personally to convey their own meaning in relation to the elements of the graph.

Another goal was making comparisons and contrasts among the graph's features. To do so, they planned to use the size of the bars and arithmetic information (e.g., percentages) to formulate their responses. In line with the construction-integration (CI) model, when test takers did not possess domain knowledge, graphical skill, or explanatory skill, they engaged in inferential processes consisting of multiple retrieval and comparison substeps. This explains why the test takers categorized the identified features several times and compared them on the basis of commonalities or contrasted them on the basis of differences in the quantitative relations.

Another component of strategic competence is assessment, which involved test takers in different subprocesses. All the test takers assessed the task's features by analyzing the instruction, a recursive set of subprocesses such as reading the instruction, underlining, note taking, and self-questioning. They also assessed the graph's features by scanning the x-axis and y-axis, reading the caption, reading the title, and scanning the colors of the bars. In line with Bachman and Palmer (1996), the assessment component enabled test takers not only to assess the task's characteristics but also to assess their relevant language knowledge (e.g., vocabulary, grammatical, and topical knowledge) in forming their responses. In reformulating their descriptions as written discourse, they first assessed their graph comprehension and their written discourse, and then set a different goal to reproduce their responses. They also evaluated their writing performance and comprehension by rereading the instruction, revising, reviewing notes, and editing at all stages of their writing. In line with the working model of the GPT proposed by Yu et al. (2011), test takers re-presented graphic and non-graphic information in continuous discourse. In this study, test takers sought to assess the correctness and appropriateness of their responses, identify new information in the graph by rereading the instruction, rescanning the graph features, and inferring new interpretations in order to reformulate their descriptions in English as a Foreign Language (EFL) written discourse.

The last component of strategic competence is planning, which comprised different subprocesses applied by the test takers. Based on this model, test takers retrieved different aspects of language knowledge, including grammar, choice of vocabulary, cohesion and coherence, and topical knowledge related to the topic of the GPT; selected the appropriate aspects of their relevant and available knowledge; and put them down as written responses. While re-presenting graphic and non-graphic information as EFL written discourse, they thought of the structure, vocabulary, or world knowledge required to match their intended meaning in the rewriting, revising, and editing stages. The findings showed that editing, rereading, and rewriting happened recursively. While composing and rewriting, they reread each paragraph for its content and edited it grammatically and lexically to approach what they aimed to convey. Some students reported that they edited more during rewriting than during first writing, since focusing too much on grammar or punctuation in the first draft distracted them from composing the main points.

In the present study, the process called analyzing non-graphic information is similar to the process of "comprehending non-graphic information" presented by Yu et al. (2011), which underlies the subprocesses through which test takers analyzed the non-graphic information in the instruction. In both processes, test takers moved back and forth among reading the introductory part, reading the "summarize…" sentence, reading the time allowance, and reading the minimum number of words, reiteratively, to interpret and comprehend the instruction's non-graphic information.

Unlike Yu et al. (2011), the present grounded model of the GPT includes two additional subprocesses used by students while analyzing the instruction: underlining key words in the instruction and self-questioning different elements of the instruction. These two subprocesses differentiate our model from that of Yu et al. (2011) in that test takers applied them to identify more elements in the instruction and relate those elements to their memory representations of what the instruction conveyed. They also assessed their domain knowledge to select the appropriate and available elements to form their responses and recognized unclear phrases by rereading the instruction and examining the graphic information. Most of the participants read the time allowance and expected length, which contrasts with Yu et al. (2011), who reported that most students did not pay attention to the time allowance or the minimum number of words.

In contrast to Yu et al. (2011), the test takers translated both graphic and non-graphic information into written discourse by writing it down as key words or sentences. Yu et al. (2011) demonstrated that all participants were continuously engaged in re-producing graphic and non-graphic information as written discourse. The present model, however, showed that translating graphic and non-graphic information occurred at two stages: before the main writing, through note taking, and after examining graph comprehension. While some students were reformulating their graph descriptions into EFL written discourse, they would come across new features, take some notes, and return to their main writing to complete it. While taking notes, they classified the information, specified the differences between percentages on the y-axis, and then compared or contrasted them to identify a relation or significant information underlying the graph. The main reason students took notes before the main writing was to examine both the graphic and non-graphic information, diagnose the pattern or significant information underlying the graph, and integrate it with their English writing abilities before starting the main writing (i.e., the reformulating-graph-description stage).

The next process is examining graph comprehension, which involved two subprocesses: interpreting graphic referents (e.g., scanning the graph's title, x-axis, y-axis, caption, bar sizes, and colors; computing the differences between percentages) and integrating graphic referents with the pattern or significant information underlying the graph. In line with Pinker (1990), the test takers in the present study encoded the bar graph to explore the prominent visual features (e.g., one bar being taller or shorter than another). Then, they found the arithmetic information (e.g., differences between percentages) that each bar represented by scanning the axes, guessing the percentages, drawing lines, and subtracting the percentages from each other. Finally, they linked the percentages with graphic referents comprising the variables represented on the x-axis (e.g., different educational settings), the y-axis (e.g., the percentages), the colors of the bars (e.g., white and black bars), the size of the bars (e.g., equal, taller, and shorter), and the title of the graph (e.g., UK and international students gaining a second class degree or better in 2009). These processes occurred recursively, as represented by Carpenter and Shah (1998) and Shah and Carpenter (1995): participants switched between them to encode the visual array, identify quantitative facts, and relate them to the graphic variables.
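To make the arithmetic step concrete, the sketch below performs the kind of subtraction-and-labelling the test takers described: take two groups' percentages for each category, compute the difference, and label the relation as higher, lower, or equal. The category names and percentage values here are invented for illustration and are not the actual data from the task's bar graph.

```python
# Illustrative sketch of the comparison step described above.
# Categories and percentages are hypothetical, not the task's actual data.
uk = {"social science": 65, "applied science": 50, "literature": 60}
international = {"social science": 45, "applied science": 70, "literature": 60}

for field in uk:
    diff = uk[field] - international[field]  # difference in percentage points
    if diff > 0:
        relation = "higher for UK students"
    elif diff < 0:
        relation = "higher for international students"
    else:
        relation = "equal"
    print(f"{field}: difference = {diff:+d} percentage points ({relation})")
```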

However, these models (Pinker 1990; Carpenter and Shah 1998; Shah and Carpenter 1995) are limited in that some processes, including inferential processes, were not investigated. They did not explore the specific processes participants applied to integrate graphic variables with the pattern underlying the graph. The processes they demonstrated concerned graph comprehension regardless of the students' writing performance and writing abilities, which can affect the graph comprehension process. Additionally, they emphasized line graph comprehension, which invokes different processes from tasks that use a bar graph as a visual prompt. By contrast, the present GPT grounded model focuses on the processes test takers employ to describe the graph as well as on their English writing abilities. One of these processes was making inferences: in order to account for the pattern or significant information underlying the graph, test takers made inferences by drawing upon their personal interpretations and prior knowledge.

Furthermore, the knowledge-based model of graph comprehension presented by Freedman and Shah (2002) fits the GPT grounded model in this study. The model (see Fig. 4) shows that test takers moved between integrating graphic referents with the graphic pattern and retrieving personal interpretations and explanatory reasoning. Similarly, Freedman and Shah (2002) suggested that top-down and bottom-up processing are affected by different types of knowledge: graphical skill, domain knowledge, and explanatory skill. Some test takers retrieved different aspects of their prior knowledge to interpret the pattern underlying the graph, as Carpenter and Shah (1998) explained: "individual differences in graphic knowledge should play as large a role in the comprehension process as does variation in the properties of the graph itself" (p. 97). For example, individual differences in background knowledge related to graph description led some test takers to use their skills to compute the differences between percentages. These differences in graphic knowledge led other test takers to provide additional reasoning for the identified relations or to categorize them using discrete values (e.g., higher, lower, equal, better, more, and less).

In addition to graph familiarity, prior knowledge plays a main role in shaping graph viewers' biases. According to Freedman and Shah (2002), when the graphic information is clearly indicated in the text, the graph reader does not need to make inferences to form a coherent representation of the text. However, if graphic information is not clearly shown in the graph, graph readers have to make inferences to explain the relationships among the elements of the graph. This matches the inferential process on which the test takers in the present study drew to account for the underlying relations in the graph. Furthermore, some of the test takers relied on inferences to comprehend the instruction's ambiguous phrase (e.g., second class degree). On the other hand, when readers have no expectations, they explain the graph in terms of minima and maxima, which is also in line with the present study: most of the test takers attempted to categorize the elements of the bar graph into different groups (e.g., maximum, minimum, and equal) and to compare and contrast them. According to Shah and Carpenter (1995), when the information included in the graph is not explicitly demonstrated, graph readers have to transform that information mentally into a form that enables them to make inferences about those facts or relations. In consonance with Shah and Carpenter (1995), some test takers engaged in multiple rounds of comparing and categorizing to transform the information presented in the instruction or in the graph into different classifications and juxtapose them in order to make inferences or reason about them. Other test takers, who did not possess the relevant domain knowledge, identified the percentage of each bar specifically, by drawing a line from the x-axis to the y-axis or by guessing, and clustered the bars separately for each group to provide a surface-level description (e.g., describing each bar individually).

Moreover, the process of examining graph comprehension is consistent with the process of comprehending graphic information presented by Yu et al. (2011). The test takers engaged in recursive processes: they interpreted graphic referents and then moved on to associate them with the pattern underlying the graph, using prior knowledge, identifying the significant pattern, and providing additional reasoning. With regard to personal interpretation and reasoning, this process is consistent with the process of personal interpretation in graph-based writing tasks. The test takers constantly examined the graphic features against their domain knowledge about the content of the graph (e.g., the number of students gaining a second class degree) and about the instruction's introductory sentence. Neither the content of the graph nor the instruction's sentence was clear enough for the test takers to describe the graph. Therefore, they constantly moved between analyzing the graphic information and checking the non-graphic information, from which they extracted the meaning of the conceptual sentence (e.g., the number of students gaining a second class degree).

In consonance with the process of "re-presenting graphic and non-graphic information into continuous writing discourse" identified by Yu et al. (2011), we found that test takers reformulated their graph descriptions as EFL written discourse both while and after examining the graphic information. In both processes, test takers planned and organized their responses (e.g., selecting appropriate and available elements of language knowledge, such as choice of words), examined the accuracy of the content in terms of linguistic forms (e.g., grammar, punctuation, and spelling), and self-monitored and evaluated their writing performance by rereading and counting the number of words. Moreover, in both processes, students constantly re-examined their comprehension of the graphic information, returned to their texts, and revised the features they had identified and written. However, the GPT grounded model demonstrated additional processes, including rereading to identify new features and rewriting several drafts to convey their meaning in the best way. Rewriting also enabled them to edit their texts to make them linguistically correct and appropriate. They revised through several drafts to modify the written discourse and improve its quality for the intended readers, as Hayes (2004) stated: "in many cases, we revise not because we discover a fault but we discover something better to say or find a better way to say what we have said" (p. 11).

The last feature of the grounded model is its recursiveness, which is consistent with the work of Pinker (1990) and Yu et al. (2011), who indicated that test takers interacted and switched among processes and subprocesses to describe the graph. Likewise, Kennedy (1974) argued that "sometimes we read a label or caption before looking at the picture, but more often, probably, we notice the picture first and recognize the pictured object without any help from the accompanying words" (p. 7). The participants shifted among different processes; for example, when they took notes on the instruction and the graph, they reread the instruction and returned to the graph to rescan the two axes. In terms of pattern recognition, they checked their notes, classified the information, made multiple comparisons, and then reanalyzed the graph features to make inferences. They also revised their texts during the first writing, rewrote them twice or more, and edited as they were writing and reproducing their responses as written discourse.

Conclusion

In the present study, we focused on the common pattern of cognitive processes in the GPT through a GT approach. Methodologically, these findings, in particular those on the differences and similarities between the cognitive processes of test takers in the GPT, highlight the usefulness of grounded theory, in comparison with quantitative methods, for developing a theoretical model while recognizing its relation to the components of strategic competence. The developed model has three main features. First, the grounded model is recursive: all the test takers moved backwards and forwards among various stages to execute different mental processes to complete the task, as presented in Fig. 4. Second, the findings cast light on the strategic competence model developed by Bachman and Palmer (1996) as follows: assessment (analyzing the instruction, interpreting graphic referents, and examining comprehension); goal setting and planning (integrating graphic referents with graphic functions through clustering/classifying arithmetic information, identifying significant information, contrasting features, and inferring; reformulating written discourse; and retrieving graphicacy skill, explanatory reasoning, and different domains of knowledge, such as language knowledge and topical knowledge). Moreover, the findings of this study do not validate the concept of strategic competence as comprising only the metacognitive strategies proposed by Bachman and Palmer (1996). Instead, the grounded model goes beyond a set of metacognitive strategies and includes cognitive strategies (e.g., underlining and translating) as well as metacognitive strategies. Third, the findings illustrated that prior knowledge, personal interpretation, and additional reasoning influenced the interpretation of the graph data. Finally, the findings illustrated that the processes included in the GPT grounded model did not reflect the processes the test developers intended to measure, indicating construct-irrelevant variance that made the GPT difficult for test takers to comprehend and describe.

Implication of the present study

The information gathered in this study provides valuable insights into the cognitive processes test takers engage in while completing the GPT. The grounded model provides a useful structure for designing graph-writing tasks, specifically by paying attention to the effect of test method facets and to the construct validity of the task in light of the underlying processes. Test developers need to recognize the effect of test method variability that could influence test-taking processes, and then make greater efforts to minimize the factors of each test method that can alter the kinds of strategies test takers use in response to the task. The grounded model also provides useful information regarding the construct validity of the task. Test designers should consider the extent to which the processes involved in the GPT reflect the processes they intend to measure. This helps them scrutinize the attributes of each method, including the instructions and writing prompts, in order to control the effect of construct-irrelevant variance on the processes in which test takers engage to complete the writing task. This study also provides valuable insights for language teaching. Teachers need to develop a greater understanding of the GPT grounded model and its underlying processes in order to modify their approach to teaching graph writing. They also need to instruct students and make them aware of the task requirements of describing, comparing, and organizing in the GPT. This can help students focus only on the information presented as facts in the graph, rather than relying on personal interpretation or background knowledge, in order to enhance their successful completion of the task.

Suggestions for further research

This study has limitations that can be addressed in future research. First, the present study relied mainly on think-aloud protocols, which might have influenced or transformed the way test takers described and wrote the task. The use of eye tracking to analyze test takers' eye movements would therefore provide a means to better understand how they engage in different cognitive processes in the GPT. Second, the analysis was made mainly with reference to the identified processes, regardless of test scores. Future studies can integrate test-taking processes with test scores to gain a broader perspective on cognitive processes and a more in-depth understanding of the test-taking processes involved in writing tasks. Third, further research is needed to investigate the mental processes elicited by different types of graph prompts to provide further evidence for validating the strategic competence model presented by Bachman and Palmer (1996).

References

  • Anderson, J. R., & Bower, G. H. (1973). Human associative memory. Psychology Press.

  • Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.

  • Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: designing and developing useful language tests. Oxford: Oxford University Press.

  • Bachman, L. F. (2004). Statistical analyses for language assessment. Cambridge: Cambridge University Press.

  • Canale, M. (1983). From communicative competence to communicative language pedagogy. Language and Communication, 1, 1–47.

  • Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1–47.

  • Candlin, C. (1986). Explaining communicative competence limits of testability. Paper presented at Toward communicative competence testing: Proceedings of the second TOEFL invitational conference.

  • Carpenter, P. A., & Shah, P. (1998). A model of the perceptual and conceptual processes in graph comprehension. Journal of Experimental Psychology: Applied, 4(2), 75.

  • Charmaz, K., & Smith, J. A. (2003). A practical guide to research: grounded theory.

  • Charmaz, K. (2006a). Constructing grounded theory: a practical guide through qualitative research. London: Sage Publications Ltd.

  • Charmaz, K. (2006b). Constructing grounded theory: a practical guide through qualitative research. London: Sage.

  • Charmaz, K. (2008a). Constructionism and the grounded theory method.

  • Charmaz, K. (2008b). Grounded theory as an emergent method.

  • Charmaz, K. (2014). Constructing grounded theory. Sage.

  • Creswell, C., & Chalder, T. (2002). Underlying self-esteem in chronic fatigue syndrome. Journal of Psychosomatic Research, 53(3), 755–761.

  • Flavell, J. H. (1987). Speculations about the nature and development of metacognition. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation and understanding (pp. 21–29). Hillside: Lawrence Erlbaum Associates.

  • Freedman, E. G., & Shah, P. (2002). Toward a model of knowledge-based graph comprehension. In Diagrammatic representation and inference (pp. 18–30). Springer.

  • Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. London: Weidenfeld and Nicolson.

  • Hayes, R. (2004). Book review: Johnstone, M.-J. (2004), Effective writing for health professionals: a practical guide to getting published. Australian Health Review, 27(1), 134.

  • Jones, S. R., Torres, V., & Arminio, J. (2006). Negotiating the complexities of qualitative research in higher education: fundamental elements and issues. Routledge.

  • Kennedy (1974). Reading level determination for selected texts.

  • Kintsch, W. (1988). The role of knowledge in discourse comprehension: a construction-integration model. Psychological Review, 95(2), 163.

  • Lewins, A., & Silver, C. (2007). Using software in qualitative research: a step-by-step guide. Sage.

  • Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry (Vol. 75). Sage.

  • O’Loughlin, K., & Wigglesworth, G. (2003). Task design in IELTS academic writing task 1: the effect of quantity and manner of presentation of information on candidate writing. IELTS Research Reports, 4, 89–129.

  • Phakiti, A. (2003). A closer look at gender and strategy use in L2 reading. Language Learning, 53(4), 649–702.

  • Phakiti, A. (2008a). Construct validation of Bachman and Palmer’s (1996) strategic competence model over time in EFL reading tests. Language Testing, 25(2), 237–272.

  • Phakiti, A. (2008b). Strategic competence as a fourth-order factor model: a structural equation modeling approach. Language Assessment Quarterly, 5(1), 20–42.

  • Pinker, S. (1990). A theory of graph comprehension. In Artificial intelligence and the future of testing (pp. 73–126).

  • Purpura, J. E. (1997). An analysis of the relationships between test takers’ cognitive and metacognitive strategy use and second language test performance. Language Learning, 47(2), 289–325.

  • Purpura, J. E. (1999). Learner strategy use and performance on language tests: a structural equation modeling approach (Vol. 8). Cambridge University Press.

  • Saldana, J. (2012). The coding manual for qualitative researchers. SAGE Publications.

  • Shah, P., & Carpenter, P. A. (1995). Conceptual limitations in comprehending line graphs. Journal of Experimental Psychology: General, 124(1), 43.

  • Shah, P., Mayer, R. E., & Hegarty, M. (1999). Graphs as aids to knowledge construction: signaling techniques for guiding the process of graph comprehension. Journal of Educational Psychology, 91(4), 690.

  • Skeat, J., & Perry, A. (2008). Grounded theory as a method for research in speech and language therapy. International Journal of Language & Communication Disorders, 43(2), 95–109.

  • Spiggle, S. (1994). Analysis and interpretation of qualitative data in consumer research. Journal of Consumer Research, 21, 491–503.

  • Strauss, A., & Corbin, J. (1998). Basics of qualitative research: procedures and techniques for developing grounded theory. Thousand Oaks: Sage.

  • Yang, H.-C. (2012). Modeling the relationships between test-taking strategies and test performance on a graph-writing task: implications for EAP. English for Specific Purposes, 31(3), 174–187.

  • Yu, G., Rea-Dickins, P., & Kiely, R. (2011). The cognitive processes of taking IELTS academic writing task one. IELTS Research Reports.


Authors’ contributions

Both authors read and approved the final manuscript.

Author information

Corresponding author

Correspondence to Fereshteh Tadayon.

Additional file

Additional file 1: Appendix A.

GPT tasks in practice stage with thinking aloud. Appendix B. GPT tasks in pilot study and warm-up session. Appendix C. Think aloud training document. Appendix D. GPT task in real session with thinking aloud. (DOCX 1214 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Tadayon, F., Ravand, H. Using grounded theory to validate Bachman and Palmer’s (1996) strategic competence in EFL graph-writing. Lang Test Asia 6, 8 (2016). https://doi.org/10.1186/s40468-016-0031-y


Keywords