- Open Access
Development and validation of a metamemory maturity questionnaire in the context of English as a foreign language
Language Testing in Asia volume 11, Article number: 24 (2021)
To determine the inherent components of language learners’ capacity for metamemory maturity, the researchers drafted a metamemory maturity (MMM) questionnaire based on Hultsch et al.’s (Memory self-knowledge and self-efficacy in the aged, Springer-Verlag 65–92, 1988) model. The volunteer participants were a heterogeneous sample of 356 male and female English as a Foreign Language (EFL) teachers and student teachers with various age ranges, teaching experiences, and educational backgrounds. Through a series of factor analytic procedures and structural equation modeling, the final draft of the questionnaire with 30 binary Likert-scale items was validated. Statistics confirmed acceptable measures of internal consistency as well as convergent and discriminant validity. The newly designed MMM questionnaire consisted of three main components, namely memory strategy use (12 items), memory attentiveness (6 items), and memory factual awareness (6 items), along with a moderator component of confidence and affect (6 items). The researchers highlight the implications of this questionnaire as an instrument for teachers to analyze EFL learners’ needs for metamemory enhancement strategies.
A great deal of any learning process concerns the recollection of to-be-recalled pieces of information (Elimam & Chilton, 2018; Logan et al., 2012). Thus, learning a second/foreign language is drastically dependent on how efficiently language learners’ memory system is used and manipulated (Durand López, 2021; Rakow et al., 2010; Rankin, 2017). Memory capacity on its own and without enhancement strategies will become static or even deteriorate (Dunning & Holmes, 2014; Gathercole et al., 2019; Strobach & Schubert, 2016). In other words, memory operations tend to be limited and inefficient if the memory system remains untrained. Hence, reasonable manipulation of memory systems can stem this loss (Dunlosky & Metcalfe, 2009), because memory enhancement strategies pave the way for influencing the neurology of the brain and boosting the retrieval of stored knowledge (Baddeley et al., 2015; Li et al., 2019; Salmi et al., 2018). Therefore, language learners tend to pursue effective ways to uplift the productive capacity of memory by closely observing, re-evaluating, and adopting necessary regulations of memory functionality (Dunlosky & Thiede, 2013).
From epistemological and psychological perspectives, the Greek prefix “meta” is usually attached to a word to denote a discussion about that concept or process (Hertzog & Curley, 2018). In the same vein, Martinez (2006) defined metacognition as thinking about, monitoring, and controlling one’s own thoughts. To elaborate on the breadth of metacognitive functioning, Martinez proposed a taxonomy of three main categories for metacognition, namely, metamemory, meta-comprehension, and problem-solving and logical thinking. Metamemory is broadly understood as enhancing self-awareness, regulating one’s own memory processes, constructing knowledge, generating awareness, and self-monitoring of memory functions (Dunlosky & Thiede, 2013; Hertzog & Curley, 2018). Therefore, metamemory is just one subcomponent of metacognition, which influences higher-order thinking and learning in a variety of ways, especially in terms of making effective use of limited cognitive resources, using strategies, and tracking comprehension (Bjork & Bjork, 2011; Dunlosky & Thiede, 2013; Stone, 2000).
Metamemory in theory and practice has recently been scrutinized in a number of research studies (e.g., Blake et al., 2015; Cottini et al., 2018; Dunlosky & Bjork, 2008; Dunlosky & Metcalfe, 2009; Dunlosky & Thiede, 2013; Einstein & McDaniel, 2007; Maki et al., 2009). Metamemory has been studied from different standpoints such as intrinsic and extrinsic metamemory typology (Susser & Mulligan, 2019), the effect of expectancy illusion on metamemory (Schaper & Bayen, 2021), the effect of mnemonic devices on metamemory (Mieth et al., 2021), different strategy choices for enhancing metamemory (Park et al., 2018), and its relationship with cognitive offloading (Hu et al., 2019). However, experimental studies have mainly focused on the relevant courses of action in clinical psychology and ordinary life habits to measure the clients’ metamemory construct in the course of tasks and activities (Dixon & Hultsch, 1983; Tonković & Vranić, 2011; Troyer et al., 2019; Van Ede & Coetzee, 1996).
In a number of studies, SLA researchers have recently shown their interest in encouraging language learners’ manipulation of metacognitive mechanisms (Bui & Kong, 2019; Cer, 2019; Han, 2020; Zhang & Zhang, 2019). However, only a marginal number of works have addressed the substantial role of metamemory maturity in the acquisition process of a second/foreign (L2) language. By definition, maturity entails language learners’ continuous development of responsibility, knowledge, reflectiveness, self-esteem, autonomy, and cognizance in a certain task (Dermanova & Manukyan, 2010). Undoubtedly, prior to any metamemory manipulation, an observation of L2 learners’ metamemory maturity is required. Thus, developing an instrument to measure the gradual improvement in the status quo of metamemory seems necessary, since the existing questionnaires have no valid applications to language learners in L2 contexts.
Some well-known metamemory questionnaires are the Metamemory in Adulthood (MIA) (Dixon & Hultsch, 1983); the Metamemory, Memory Strategy, and Study Technique Inventory (MMSSTI) (Van Ede & Coetzee, 1996); the Everyday Memory Questionnaire (EMQ) (Royle & Lincoln, 2008); and the Self-Evaluation of Memory Systems Questionnaire (SMSQ) (Tonković & Vranić, 2011). These questionnaires have commonly adopted a neurocognitive approach to the nature of metamemory, mostly rooted in how clinical patients verbalize and picture their own memory processes. Despite reportedly satisfactory psychometric properties, they are criticized for critical shortcomings such as a large number of items, which causes boredom and distraction in respondents, disarrangement of the items, unexamined convergent and discriminant validity measures, and low generalizability. Widely used in psychotherapeutic contexts, these questionnaires can hardly account for the dynamic psychological attributes of language learners. Hence, the absence of suitable instruments for measuring metamemory maturity in other research domains such as L2 teaching/learning is evident. To fill the void, the researchers in the present study conceptualized the components of metamemory to measure its maturity, with a specific focus on EFL learners and student teachers. The questionnaire drafted and validated in this study was labeled the metamemory maturity (MMM) questionnaire.
The incentive behind developing the MMM questionnaire was twofold. First, due to the dynamic nature of the second/foreign language learning and teaching context, in which various interacting factors have serious impacts on both the quality and quantity of language learning, the acting variables are seemingly different from those in therapeutic and clinical settings. Therefore, the sampling errors caused by such variations would lead to fluctuating patterns of data in non-clinical educational (i.e., EFL) contexts, which would eventually deteriorate the reliability of the results if the available questionnaires were applied (Best & Kahn, 2006). In order to reduce the margin of error, the theoretical framework of the MMM questionnaire was specifically grounded in the target population of EFL teachers and EFL student teachers. Second, the MMM questionnaire was constructed on solid theoretical and statistical grounds to compensate for the shortcomings of other questionnaires, such as a large number of items, small sample sizes and the subsequent low generalizability index, and fallacies in discriminating the components of the metamemory construct. The items in the MMM questionnaire are based on the well-known model of metamemory introduced by Hultsch et al. (1988) with four main components of memory factual knowledge, memory monitoring, memory self-efficacy, and memory-related affect. In their original and comprehensive model of metamemory, Hultsch et al. elaborated on these components in detail.
The first component of metamemory in Hultsch et al. (1988), memory factual knowledge, is defined as someone’s knowledge of what memory is and what pertinent tasks and strategies can be used for better results in a memory-demanding situation (Dunlosky & Thiede, 2013; Dunning & Holmes, 2014; Gathercole et al., 2019; Strobach & Schubert, 2016). Memory factual knowledge encompasses a wide range of principal and practical undertakings (Hultsch et al., 1988). Some language learners appear incognizant of how their memory works and where the plans for storing, processing, and retrieving language input are grounded (Robinson, 2017; Spanoudis & Demetriou, 2020). Researchers maintain that learners who incorporate memory factual knowledge stay vigilant in employing memory enhancement strategies (Kazi et al., 2019). General knowledge of diets, hydration, bedtime, and sleep deprivation effects, as well as sufficient knowledge of how memory operates, are among the instances of memory factual knowledge in metamemory (Cousins & Fernández, 2019; Peng et al., 2020; Tamminen et al., 2020).
The second component of metamemory, memory monitoring, refers to someone’s close observation of self-memory use in memory-demanding tasks (Hultsch et al., 1988). In memory monitoring, the process of applying memory factual knowledge to memory tasks is keenly followed (Huff & Bodner, 2013). Time allocation strategies (Ariel et al., 2009; Double & Birney, 2019; Tauber & Rhodes, 2012), spaced practice, re-studying, and scheduling (Carvalho & Goldstone, 2021; Kelley & Whatson, 2013; Logan et al., 2012; Son, 2010), as well as drawing on judgments of learning (JOLs) (Janes et al., 2018; Myers et al., 2020), are some examples of memory monitoring.
Memory self-efficacy is the third component in Hultsch et al.’s (1988) model, which probes the extent to which learners feel content about their own memory capacity and memory functionality. Aging, which is typically tinged with declines in memory potentiality, and lack of daily brain activity, which causes stagnancy of the memory systems, are reported as contributing factors to low satisfaction with memory efficacy (Bubbico et al., 2019; Li et al., 2019; Pfenninger & Singleton, 2019). Health issues also contribute to growing memory loss and subsequent dissatisfaction (Mandolesi et al., 2018). On the other hand, factors such as education, effortful strategy use (Laine et al., 2018; Peng & Fuchs, 2017), and confidence-raising exercises are popular remedies for low memory self-efficacy (Auslander et al., 2017; Boldt & Gilbert, 2019).
Finally, memory-related affect, as the fourth component, embeds the emotional factors playing roles in memory-demanding activities (Hultsch et al., 1988). Emotions by nature can either facilitate memory functions or cause cognitive impairments. Language learners with high anxiety levels, for instance, are more vulnerable to memory loss (Riegel et al., 2015), and depressed EFL learners are reported to be deficient in the memory-adaptive behaviors that result in effective language learning uptake (Staniloiu & Markowitsch, 2019).
As stated earlier, the term maturity encompasses a steady development in responsibility, knowledge, reflectiveness, self-esteem, autonomy, and cognizance in a certain task. In essence, a growing maturity in metamemory and metacognitive pursuits seems to be essential to the performance of those who are actively involved in the context of language learning and even teaching (Dunlosky & Thiede, 2013). Metamemory maturity is likely to improve the functionality, self-satisfaction, and awareness of language learners after a period of training (Baddeley et al., 2015). Therefore, it seemed critical to develop a valid and reliable instrument for identifying and measuring the components of metamemory maturity in the L2 context and to statistically confirm the soundness of its underlying components.
Reportedly, impairments at different stages of memory are tied to learning failure in general (Chein & Morrison, 2010; Daneman & Hannon, 2007; Oberauer et al., 2008) and language learning in particular (Carroll, 2004). In practice, countless language learners are blamed for their inefficient memory in retaining new words or language structures (Baddeley, 2003), yet such weaknesses can be overcome by training learners to gain maturity in monitoring and manipulating memory use (Baddeley et al., 2015). Such training needs an instrument to obtain a vivid picture of the language learners’ memory status quo. However, the SLA community lacks a sound and comprehensive scale. To fill the gap, the researchers attempted to develop and validate a metamemory maturity questionnaire addressing the target population of EFL language learners and teachers. The following research questions were raised and explored in this study:
RQ1: What are the psychometric properties of the metamemory maturity (MMM) questionnaire in an EFL context?
RQ2: What are the underlying components of the metamemory maturity (MMM) questionnaire?
RQ3: To what extent does the structural model of metamemory (MMM) questionnaire fit the hypothetical model generated by relevant literature review?
A total of 356 participants voluntarily took part in the present study. They were selected through a snowball non-random sampling procedure (Heckathorn, 2002) from a pool of experienced EFL teachers and EFL student teachers at three private language institutes as well as student teachers in three universities in Iran. Table 1 summarizes the demographic information of the participants in this study.
Since determining the sample size in this study was a major issue for running the statistical tests of exploratory and confirmatory factor analyses (EFA and CFA) as well as structural equation modeling (SEM), a widely noted approach to sample size estimation by Kline (2011) was adopted. Kline argued that to determine the optimal number of respondents to a questionnaire in the piloting phase, a sample size of 30 to 460 is required when the number of components in the conceptual model is three to eight. Because the MMM questionnaire was developed based on the four components of (1) memory factual knowledge, (2) memory monitoring, (3) self-efficacy, and (4) memory-related affect in Hultsch et al.’s (1988) model of metamemory, a minimum sample size of 360 participants was required.
Determining an appropriate sample size is a critical issue in SEM studies, but, unfortunately, an exact, agreed-upon consensus does not exist in the literature (see Wang & Wang, 2020). There is no absolute standard concerning an adequate sample size and no rule of thumb that applies to all SEM contexts (see Wang & Rhemtulla, 2021 for an update). The determination of sample size, as Wang and Wang (2020) neatly summarized, depends on a large number of factors, including the number of free parameters and the number of indicators per latent variable, data characteristics, and the model being tested, such as reliability of the observed indicators, study design (e.g., cross-sectional versus longitudinal), degree of data multivariate normality, handling of missing data, model complexity, and the model estimators. Given the multiplicity of factors in the determination of the SEM sample sizes, researchers conducting SEM studies resort to rules of thumb recommending either absolute minimum sample sizes (e.g., n = 100 or 200; Boomsma, 1985) or sample sizes based on model complexity (e.g., n = 5–10 per estimated parameter, according to Bentler & Chou, 1987; n = 3–6 per variable, according to Lee and Song, 2004). However, as Wang and Rhemtulla (2021) remind us, these rules of thumb “do not always agree with each other, have little empirical support … and generalize to only a small range of model types” (p. 1). By implication, the recommendations in the literature concerning SEM sample sizes either reflect theoretical orientations or are based on a very small number of empirical research studies.
Following model complexity to determine sample size in their SEM studies, researchers usually use the ratio of participants/cases to items/variables. Even with this approach, researchers appear to be divided over the minimum number of participants for a SEM analysis: while some (e.g., Kline, 2016) consider five cases per variable to be the minimum sample size for a SEM study, others like Lee and Song (2004) recommend a minimum of three cases per variable. However, when model complexity is used, researchers usually follow Kline’s recommendation for the minimum number of participants, as one of the anonymous reviewers also pointed out. Therefore, in our study, following Lee and Song’s (2004) recommendation, we needed at least 216 participants, but according to Kline’s suggestion, a sample size of at least 360 was needed. The present study included 356 language teachers, four participants short of Kline’s figure. Although the absence of these very few participants may not generally affect the findings (mainly due to the robustness of the SEM test), we consider it a limitation of our study.
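The cases-per-variable arithmetic above can be sketched in a few lines of Python. This is an illustrative calculation only, not the authors' procedure; the item count (72, the administered draft) and the per-variable ratios are taken from the text.

```python
# Illustrative sketch of minimum-sample-size rules of thumb based on
# cases-per-variable ratios, applied to the 72-item draft questionnaire.

def min_n_per_variable(n_variables: int, ratio: int) -> int:
    """Minimum N under a cases-per-variable rule of thumb."""
    return n_variables * ratio

items = 72  # items in the administered draft of the MMM questionnaire

lee_song_min = min_n_per_variable(items, 3)  # Lee and Song (2004): 3 per variable
kline_min = min_n_per_variable(items, 5)     # Kline (2016): 5 per variable

print(lee_song_min, kline_min)  # 216 360
```

These are the 216 and 360 figures quoted in the paragraph above; with 356 respondents, the study satisfies the more lenient rule but falls marginally short of the stricter one.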
For each of the components of Hultsch et al.’s (1988) model of metamemory (i.e., memory factual knowledge, memory monitoring, self-efficacy, and memory-related affect), a comprehensive review of the literature was conducted, the results of which were incorporated into a number of themes and operational definitions. These statements were later used to draft a total of 80 Likert-scale items, with twenty items allocated to each of Hultsch et al.’s (1988) metamemory components. In Table 2, the hypothetical components of the MMM questionnaire and their encoded themes are presented with some selected items from its first draft.
Tables 3, 4, 5, and 6 show four example items from the questionnaire with their theoretical background and reference entries. Each item represents one component in the final draft of the questionnaire (i.e., memory factual knowledge, memory monitoring, memory self-efficacy, and memory-related affect). To avoid acquiescence bias (Dornyei & Taguchi, 2010), a binary Likert scale was implemented, safeguarding the participants against ambivalence in responding to the items.
The initial draft of the questionnaire was reviewed by five experts, including two professors of applied linguistics and three experienced EFL teachers, for the first round of content validity and theoretical saturation. Eight items were found vague and inappropriate and were thus excluded. The final draft of the questionnaire included 20 items for the component memory factual knowledge (items 1 to 20), 18 items for the component memory monitoring (items 21 to 38), 20 items for the component memory-related affect (items 39 to 59), and 15 items for the component memory self-efficacy (items 60 to 75). The second (final) draft of the questionnaire, comprising the 72 items retained from the expert validation process, was administered over the course of 2 weeks to 356 student teachers selected from three universities and three private language schools using the snowball method of sampling. All 356 participants responded to all items of the questionnaire. Due to the COVID-19 pandemic, the questionnaire was constructed on the online Google Forms platform and distributed to the participants through their emails or personal IDs on social media. The collected data set was subjected to exploratory and confirmatory factor analysis (EFA and CFA) to determine the construct validity of the questionnaire (Osborne et al., 2008). In addition, structural equation modeling (SEM) was conducted to define the path orientation of the underlying components of the multifaceted metamemory maturity construct and their factor loadings.
Results for 72 items of the questionnaire
Prior to statistical analysis, the researchers measured the reliability of the data (Cronbach’s α = 0.865), which was interpreted as a high internal consistency index for the 72 items of the questionnaire. Inspection of the item-total statistics showed no notable change in reliability if any single item was removed. Therefore, all items were retained for factor extraction analysis.
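For readers unfamiliar with the internal consistency index reported here, the sketch below computes Cronbach's alpha from scratch on a tiny, hypothetical set of binary (0/1) responses; the study's actual data are not reproduced.

```python
# Minimal, from-scratch Cronbach's alpha on binary item responses.
# The toy data below are hypothetical, not the study's data set.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores."""
    k = len(responses[0])                    # number of items
    items = list(zip(*responses))            # transpose to per-item columns
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 4 respondents answering 3 binary Likert items
data = [[1, 1, 1], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
print(round(cronbach_alpha(data), 3))  # 0.75
```

Alpha rises when item scores covary (respondents who score high on one item score high on the others), which is why it is read as an index of internal consistency.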
The application of exploratory factor analysis (EFA)
Although the newly designed MMM questionnaire was drafted to map Hultsch et al.’s (1988) model, an EFA was conducted in order to avoid any bias towards a predetermined metamemory maturity construct. The analysis was run with Oblimin rotation on the collected responses from all 356 participants. Sampling adequacy was examined with the Kaiser-Meyer-Olkin (KMO) measure (Kaiser, 1970), whose value is expected to exceed 0.60. The KMO statistic for the data was 0.69; therefore, the assumption of sampling adequacy was met. Likewise, Bartlett’s test of sphericity was significant (p = 0.00 < 0.05), indicating that the correlation matrix in the data set differed from an identity matrix.
As a strong data set is often recommended in the literature for conducting EFA, the communality values are critical; the communality cutoff value is reported to be above 0.30 (Field, 2013). In the collected data, the main body of communalities in the output ranged from 0.60 to 0.73, with a few exceptions around 0.52. The items with moderate communality values (n = 4) were excluded from the subsequent statistical analysis in order to maintain the maximum strength of the data.
Factor extraction and retention
In a parallel analysis (PA; Horn, 1965), the observed eigenvalues were compared to a set of eigenvalues produced from uncorrelated random data by a Monte Carlo algorithm. All the observed eigenvalues in the EFA matrix surpassed the corresponding random eigenvalues, which warranted the appropriacy and acceptability of the observed eigenvalues (see Table 7).
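The retention rule in Horn's parallel analysis can be illustrated as follows. Only the highest observed eigenvalue (7.55) comes from the text; the remaining observed values and all the Monte Carlo values are hypothetical placeholders for illustration.

```python
# Hedged illustration of Horn's parallel-analysis retention rule:
# keep factors only while the observed eigenvalue exceeds the
# eigenvalue expected from uncorrelated random data.

def retain_by_parallel_analysis(observed, random_mean):
    retained = 0
    for obs, rnd in zip(observed, random_mean):
        if obs > rnd:
            retained += 1
        else:
            break  # stop at the first factor that falls below chance level
    return retained

observed_eigs = [7.55, 3.10, 2.40, 2.05, 0.95]  # 7.55 is reported in the text
random_eigs   = [1.60, 1.45, 1.35, 1.28, 1.22]  # hypothetical Monte Carlo means

print(retain_by_parallel_analysis(observed_eigs, random_eigs))  # 4
```

In a full implementation, the random eigenvalues would be the means (or 95th percentiles) of eigenvalues from many random data sets of the same dimensions as the observed data.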
In the factor retention process, twenty-six components were detected with eigenvalues above 1 (Kaiser’s criterion; Kaiser, 1960), which outnumbered the components in the Hultsch et al. (1988) model. However, 22 factors with slight variance differences were excluded prior to further statistical analysis. As illustrated in the scree plot (see Fig. 1), four components stood out in the analysis output. All four factors above the elbow had eigenvalues above 2, with the highest eigenvalue at 7.55. The four extracted factors accounted for 21% of the total variance. Because this contribution was unexpectedly low, we decided to remove items with low factor loadings to optimize the quality of the questionnaire.
In order to detect problematic items, the component matrix was investigated to identify the items that contributed to variations within each component. A few items with cross-loadings were examined, and the items with cross-loadings below 0.20 (Sosik et al., 2009) were removed from the set (n = 7). After the second round of content analysis by two professors of applied linguistics, the theoretical framework for the MMM questionnaire was determined by running structural equation modeling (SEM) in IBM SPSS AMOS 26 (see Fig. 2).
Construction of the first structural model with 43 remaining items
The application of confirmatory factor analysis (CFA)
After eliminating the items with standardized estimates of regression weight below 0.25 (n = 18) (Kwan & Chan, 2011), the initial model with four major components included the remaining 43 items in the MMM questionnaire. The surface structure of the model was designed to correspond to the four components in Hultsch et al.’s (1988) model of metamemory. However, for both theoretical and statistical reasons, a confirmatory factor analysis (CFA) was conducted in order to ensure the credibility of the model fit.
Results of the first structural model’s goodness of fit
The threshold values of RMSEA, GFI, IFI, and TLI were compared to the values obtained in the CFA. The normed chi-square was significant (χ²/df = 2.123, df = 405, p = 0.00). However, the goodness-of-fit measures GFI, IFI, and TLI were 0.85, 0.81, and 0.80, respectively.
The optimal indices for goodness of fit have been suggested by several researchers (Browne & Cudeck, 1993; Cho et al., 2020; Kline, 2011). While Browne and Cudeck (1993) recommended an acceptable range of above 0.80 for the goodness of fit (GFI), Cho et al. (2020) and Kline (2011) agreed on a GFI greater than 0.90. Hence, the measures of goodness of fit in this study were interpreted as mediocre. In order to increase the credibility of the developed structural model, we excluded further statistically unfitting items based on their factor loadings.
Despite achieving a mediocre GFI for the first structural model (Fig. 2), an attempt was made to reconstruct the model. The rationale was to detect more suitable underlying components and path algorithms of metamemory maturity and to plot a model with a higher goodness of fit. Further modifications were carried out on a number of items and components by probing the statistical fits and misfits, so that a second model with different correlational paths and underlying factors was constructed. A notable improvement in the second model came from re-evaluating the nature of the components in the first model, which increased the likelihood that the fourth extracted component served as a moderator. Thirteen more items were removed in the final phase of the SEM analysis, reducing the questionnaire to 30 items (see Fig. 3).
Construction of the second structural model with 30 items
Results of the second (finalized) structural model’s goodness of fit
To re-calculate the goodness of fit for the final model of the MMM questionnaire, reference was made to Hair Jr. et al. (2010). According to their guideline for determining acceptable factor loadings, for a sample size of 350 participants and above, an acceptable factor loading should be set above 0.3 (Hair Jr. et al., 2010). In the final model of the MMM questionnaire, the standardized loadings of the items in the main and moderator components ranged from 0.42 to 0.57, which was relatively high and acceptable.
Model fit was first assessed with the normed chi-square (χ²/df): values below 5 are interpreted as moderate but still acceptable, and values below 3 as a strong fit. The obtained value (χ²/df = 1.434, df = 407, p = 0.00) was therefore interpreted as a desirable fit. RMSEA was 0.035, below the 0.05 cutoff. The other goodness-of-fit indices were also greater than the critical value of 0.90 (IFI = 0.921, GFI = 0.909, CFI = 0.919, and TLI = 0.907). In this round of analysis, therefore, the researchers managed to reach an acceptable goodness of fit (GFI) above 0.9 (see Table 9 in Appendix for the questionnaire items and factor loadings).
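The reported RMSEA can be reproduced from the figures above under one assumption: that the reported 1.434 is the normed chi-square (the χ²/df ratio), as the surrounding interpretation suggests. The standard point-estimate formula is RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))).

```python
# Sketch reproducing the reported RMSEA from the normed chi-square,
# assuming 1.434 is the chi-square/df ratio (not the raw chi-square).
from math import sqrt

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from chi-square, degrees of freedom, and N."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

df, n = 407, 356
chi2 = 1.434 * df  # recover the raw chi-square from the normed value

print(round(rmsea(chi2, df, n), 3))  # 0.035, matching the reported value
```

That the recomputed value matches the reported 0.035 supports the reading of 1.434 as χ²/df rather than the raw chi-square.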
In addition to factor analysis, a path analysis was conducted to detect the significance of the links across the components and the construct of metamemory maturity in structural equation modeling (SEM). Both direct and indirect paths between the main components, the moderator, and the construct are demonstrated by arrows in Fig. 3. The direct paths among the main components and the construct were labeled as c1, c2, and c3. The indirect paths were shown through the arrows between the main components and the moderator (a1, a2, and a3) as well as the moderator and the construct (b). In the direct and indirect path models, the unrelated paths were programmed to be excluded from the equation to investigate their effects separately. The path construction of the entire model was in accordance with the relevant literature on path analysis and SEM (Kline, 2011).
According to Hair Jr. et al. (2010), all indices related to the moderator in a structural equation model must be significant at p < 0.05. The path analysis of the model was conducted by probing the path statistical tables. In order to explore possible differences between the presence and absence of the moderator, three separate models were designed in IBM SPSS AMOS 26 (i.e., direct, indirect, and moderation models). To ensure that the components were significantly connected to the metamemory maturity construct, the direct paths between the main components and the construct were inspected at the outset. The direct contribution of all components was warranted by significant p-values (c1, c2, c3 p-values < 0.05). In the direct model, the path coefficients for components 1, 2, and 3 were β = 0.70, β = 0.17, and β = 0.44, respectively.
After the direct path values to the metamemory maturity construct were confirmed, the indirect paths were examined. Among the three path values of a1, a2, and a3, only component 1 showed a significant p-value (a1 p-value = 0.003 < 0.05) with a path coefficient of β = 0.64. Components 2 and 3 displayed non-significant paths to the moderator (a2: β = 0.15, p = 0.080 > 0.05; a3: β = 0.41, p = 0.114 > 0.05). Therefore, it was concluded that the moderator only modified the variations in component 1 when it contributed to the metamemory maturity construct. The standardized estimates of the covariance coefficients among the main components were calculated in the next step. The estimates ranged from weak (σ = 0.19) between components 2 and 3 and relatively moderate (σ = 0.37) between components 1 and 2 to moderate (σ = 0.46) between components 1 and 3. The covariance values were evidence of a slight interaction among the components.
The significant interactions in the direct and indirect models set the stage for the moderation model, which was examined in the final step. In the moderation model, the path coefficients were β = 0.70, β = 0.17, and β = 0.44 for the paths between components 1, 2, and 3 and metamemory maturity, respectively. All three paths had significant p-values (c′1, c′2, c′3 < 0.05). Mathieu and Taylor (2006) provided a framework for decisions on the moderation effect. They suggested that if path c (i.e., the direct path from a component to the construct) is significant, the moderation effects should be examined. Then, if the path between the component and the moderator (path a) and the path between the moderator and the construct (path b) are both significant, a partial moderation is reported. If either path a or path b is non-significant, only a direct relationship between the component and the construct should be considered. In the final model of the MMM questionnaire, paths a1, b, and c′1 turned out to be acceptable; thus, a partial moderation for component 1 was determined. For components 2 and 3, no significant paths to the moderator were found. Instead, both showed direct paths to the construct (see Fig. 3).
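The decision logic of the Mathieu and Taylor (2006) framework described above can be summarized as a small helper. This is a hypothetical sketch for clarity; the function name and the Boolean significance flags are illustrative, standing in for the p-value checks reported in the text.

```python
# Hedged sketch of the Mathieu and Taylor (2006) moderation decision rule.
# c_sig: component -> construct path significant (path c)
# a_sig: component -> moderator path significant (path a)
# b_sig: moderator -> construct path significant (path b)

def moderation_decision(c_sig: bool, a_sig: bool, b_sig: bool) -> str:
    if not c_sig:
        return "no direct effect"        # moderation is not examined further
    if a_sig and b_sig:
        return "partial moderation"      # both legs of the indirect route hold
    return "direct effect only"          # one leg fails; keep the direct path

# Component 1: a1, b, and c'1 all significant
print(moderation_decision(c_sig=True, a_sig=True, b_sig=True))   # partial moderation
# Components 2 and 3: path a non-significant
print(moderation_decision(c_sig=True, a_sig=False, b_sig=True))  # direct effect only
```

Applied to the reported p-values, this yields exactly the conclusions above: partial moderation for component 1, direct effects only for components 2 and 3.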
In addition to the coefficients in the path analysis of the final model, the factor loadings of the items contributing to the main and moderator components were investigated. For the first component, the factor loadings of the 12 items ranged from 0.39 to 0.53. The six items in the second component had factor loadings ranging from 0.44 to 0.57. The third component consisted of six items with factor loadings of 0.38 to 0.52. Finally, the moderator component, with six items, had relatively lower factor loadings ranging from 0.18 to 0.32.
Validity and composite reliability (CR)
To estimate the composite reliability (CR) of the separate components of the metamemory maturity construct, the standardized regression weights and the correlation values were calculated. As Hair Jr. et al. (2010) noted, the acceptable cutoff point for CR is 0.60 and above. The CR values for components 1, 2, and 3 were all larger than 0.60 (0.798, 0.638, and 0.601, respectively). Moreover, to assess convergent validity, the researchers examined the average variance extracted (AVE). In a large sample, estimation usually yields lower AVE values owing to indicator item loading sensitivity (Hui & Wold, 1982; Lohmöller, 1989); therefore, convergent validity was judged with reference to the acceptable CR measures (above 0.60) obtained in this study. Maximum shared variance (MSV) values were obtained to assess discriminant validity. Except for a subtle violation in component 3, components 1 and 2 showed acceptable discriminant validity because their MSV was smaller than their AVE. Table 8 summarizes the CR, AVE, and MSV values.
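For readers unfamiliar with these indices, CR and AVE are simple functions of the standardized loadings. The sketch below uses the conventional formulas (as presented in Hair Jr. et al., 2010); the six loadings are hypothetical values chosen from within the range reported for component 2 (0.44 to 0.57), not the actual item loadings:

```python
# Minimal sketch of composite reliability (CR) and average variance
# extracted (AVE) from standardized factor loadings. The loadings are
# hypothetical, drawn from the 0.44-0.57 range reported for component 2.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of squared loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.44, 0.47, 0.50, 0.52, 0.55, 0.57]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # CR = 0.677, AVE = 0.260
```

The example also illustrates the point made above: with modest loadings, CR can clear the 0.60 cutoff while AVE remains low, which is why CR was used as the reference criterion here; the discriminant validity check then compares MSV against AVE.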
To sum up, the finalized model of metamemory maturity consists of three main components and a moderator explaining the variations in language learners’ metamemory maturity (see Fig. 3). Component 1, which contains 12 items, was labeled memory strategy use (MSU), as it explores language learners’ active use of memory strategies to memorize items, sort them out, and manage time during memorization. Component 2, with six items, was labeled memory attentiveness (MAt); it examines how language learners manage their attention span to build strong memories or sustain undivided attention to complex language items. Component 3, with six items, was named memory factual awareness (MFA); it probes language learners’ overall knowledge of what memory is, how it functions, and how it can be enhanced. The moderator, with six items, was labeled confidence and affect; it questions language learners’ levels of consciousness, self-control, and engagement.
The present study laid the statistical groundwork to validate the newly developed metamemory maturity (MMM) questionnaire. The first component in the MMM questionnaire, memory strategy use (MSU), includes the largest number of items (n = 12). In the literature, ample evidence supports the effective contribution of memory strategy use to language learners’ memory functionality (Laine et al., 2018; Peng & Fuchs, 2017). Although there are arguments for and against the effectiveness of memory training (e.g., Dunning & Holmes, 2014; Gathercole et al., 2019), the extensive variations in memory span can readily be accounted for within the scope of MSU in the MMM questionnaire. If language learners or teachers are trained in memory enhancement strategies such as spaced learning (Nakata, 2015), rehearsal (McKinley & Benjamin, 2020), mnemonics, acronyms, and associations (Putnam, 2015), and memory palaces (Ralby et al., 2017), they will acquire the skills to plan, employ, and execute effective learning strategies in a variety of language tasks (Klingberg, 2010). In other words, MSU focuses on the reciprocity between metamemory maturity and progressive learning experience.
Memory attentiveness (MAt), the second component in the MMM questionnaire, with six items, addresses language learners’ attention span in memory-demanding tasks. Several studies support the positive role of learners’ attentiveness in retaining to-be-memorized items and their long-term retention (Ellah et al., 2019), extensive learning uptake (Small et al., 2020), and successful encoding of information with a higher level of differentiation (Kilic et al., 2017). In an experimental study, Kilic et al. (2017) reported that when remembering a large number of selective items with similar content, an increased attention span facilitates learners’ encoding pathways and processes. Thus, both MSU and MAt depend on training and constant enhancement on the part of language learners (Wass et al., 2012; Zalbidea & Sanz, 2020).
Third in the list of the MMM questionnaire components, memory factual awareness (MFA), with six items, targets language learners’ awareness and knowledge of the memory system. Knowledge about the functionality of the memory system is multifaceted and broad, covering topics such as types of memory, mechanisms of encoding input, retention and retrieval, and techniques to maintain the brain’s physical health. Empirical studies show how students’ knowledge of memory functionality can initiate self-regulation in their language learning process (Efklides, 2009). Besides, acute awareness of the negative impacts of factors such as aging or a poor diet on memory encourages learners to adopt a healthy lifestyle, brain health exercises, and suitable diets to boost brain and memory functionality (Craik et al., 2010). The significant interaction between MSU and MFA in this study (Fig. 3) can be interpreted as pointing to the need for instruction on the memory system, which assists language learners in adopting more efficient memory strategies.
Finally, the moderating role of confidence and affect, with six items, was explored in the MMM questionnaire. The statistics showed that confidence and affect regulate the variations in one of the main components of the MMM questionnaire, MSU. The moderator was introduced into the final model of the MMM questionnaire for both statistical and theoretical reasons. Statistically, after the items with strong loadings (n = 30) had defined the main components, the remaining items (n = 6) were schematized into a moderating component. Theoretically, confidence and affect do not appear as a moderator component in Hultsch et al.’s (1988) metamemory model; however, “memory-related affect” in their model was an amalgamation of respondents’ emotional and personal attributes. In the MMM questionnaire, language learners’ positive emotions such as self-confidence are assumed to function as a regulator of component 1 (MSU) of metamemory maturity (Margeaux et al., 2017). The role of language learners’ self-confidence in selecting proper memory-related strategies and in spontaneous cognitive offloading is supported in the literature (Auslander et al., 2017; Boldt & Gilbert, 2019).
Despite the structural differences, the MMM questionnaire and the well-known MIA questionnaire (Dixon & Hultsch, 1983) show some similarities in the nature of their components. In MIA, the components of “knowledge of memory processes and tasks” and “cognitive activity” are theoretically close to memory factual awareness (MFA) in the MMM questionnaire, as they all refer to respondents’ metacognitive awareness. In particular, the MIA component “frequency of memory strategy use” mirrors memory strategy use (MSU) in the MMM questionnaire, as both emphasize the role of acquiring memory strategies. In addition, “perceptions of change in memory capacity over time” in MIA partially overlaps with MSU in the MMM questionnaire, as both support self-monitoring in respondents. “Locus of control,” another MIA component, corresponds to memory attentiveness (MAt) in the MMM questionnaire, as both require learners’ ongoing practice of attentiveness. Likewise, the MMM questionnaire and the SMSQ (Tonković & Vranić, 2011) bear some resemblance. Among the six components of the SMSQ, “episodic memory, semantic memory, memory for numbers, and visuospatial memory” differentiate memory types closely related to MFA in the MMM questionnaire, as all emphasize learners’ knowledge of the memory system and its functionality. The other two SMSQ components, “subjective evaluation” and “reminder and aids,” can be embedded in MSU in the MMM questionnaire, since all address learners’ active use of memory strategies.
The Metamemory Maturity (MMM) questionnaire was developed and validated to explore and evaluate the multifaceted nature of metamemory maturity in performance on memory-demanding tasks in EFL contexts. The researchers’ major argument in this study is that there is no such thing as a weak memory, only an untrained one. This premise was supported statistically using three analytical techniques: EFA, CFA, and SEM. The MMM questionnaire was intended to support EFL teachers and student teachers in their attempts at memory-demanding tasks such as learning and retaining complex grammatical structures, acquiring a large body of new lexical items, or taking turns in effective verbal communication.
Administering the MMM questionnaire as a placement instrument in educational environments can create an opportunity to analyze and meet students’ needs for instruction in metamemory strategies or engagement in active memory strategy use. In L2 learning and teaching contexts in particular, administering the MMM questionnaire can inform teacher trainers about student teachers’ strengths and weaknesses and thereby motivate a variety of metamemory enhancement strategies. Using strategies such as verbal and written rehearsal, visual prompts, or mnemonic rhymes, student teachers will take in the materials required for teaching more effectively (Baddeley et al., 2015). This, in turn, assists language learners in mastering such memory-demanding tasks as the sound-letter system of the L2, focusing on language form(s), and expanding their growing body of lexical knowledge.
In terms of the limitations of this study, the following points are in order. First, all the participants were non-native speakers of English, whose responses to the questionnaire could be shaped by their sociocultural and first-language backgrounds (Chun, 2014; Wang & Lin, 2013). In addition, the sample size did not reach the minimum recommended in the SEM literature, so the findings should be interpreted with caution in similar EFL learning contexts. Finally, due to time limitations and lack of access to a larger number of participants at different time intervals, we collected a single data set for validation purposes. Ideally, as one of the anonymous reviewers rightly asserted, several rounds of data collection need to be carried out to revise and validate an instrument.
Availability of data and materials
Please contact the authors for data requests.
Abbreviations
EFL: English as a Foreign Language
MMSSTI: Metamemory, Memory Strategy, and Study Technique Inventory
EMQ: Everyday Memory Questionnaire
SMSQ: Self-Evaluation of Memory Systems Questionnaire
SEM: Structural Equation Modeling
EFA: Exploratory Factor Analysis
AVE: Average Variance Extracted
Ariel, R., Dunlosky, J., & Bailey, H. (2009). Agenda-based regulation of study-time allocation: When agendas override item-based monitoring. Journal of Experimental Psychology, 138(3), 432–447. https://doi.org/10.1037/a0015928.
Auslander, W., McGinnis, H., Tlapek, S., Smith, P., Foster, A., Edmond, T., & Dunn, J. (2017). Adaptation and implementation of a trauma-focused cognitive behavioral intervention for adolescent girls in child welfare. American Journal of Orthopsychiatry, 87(3), 206–215. https://doi.org/10.1037/ort0000233.
Baddeley, A. (2003). Working memory and language: An overview. Journal of Communication Disorders, 36(3), 189–208. https://doi.org/10.1016/S0021-9924(03)00019-4.
Baddeley, A. D., Eysenck, W., & Anderson, M. C. (2015). Memory. Psychology Press. https://doi.org/10.4324/9781315749860.
Bentler, P. M., & Chou, C. P. (1987). Practical issues in structural modelling. Sociological Methods & Research, 16(1), 78–117. https://doi.org/10.1177/0049124187016001004.
Best, J. W., & Kahn, J. V. (2006). Research in education, (10th ed., ). Pearson Education, Inc.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society, (pp. 56–64). Worth Publishers.
Blake, A., Nazarian, M., & Castel, A. (2015). The apple of the mind’s eye: Everyday attention, metamemory, and reconstructive memory for the Apple logo. Quarterly Journal of Experimental Psychology, 68(5), 1–8. https://doi.org/10.1080/17470218.2014.1002798.
Boldt, A., & Gilbert, S. J. (2019). Confidence guides spontaneous cognitive offloading. Cognitive Research: Principles and Implications, 4(1), 45. https://doi.org/10.1186/s41235-019-0195-y.
Boomsma, A. (1985). Nonconvergence, improper solutions, and starting values in LISREL maximum likelihood estimation. Psychometrika, 50(2), 229–242. https://doi.org/10.1007/BF02294248.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen, & J. S. Long (Eds.), Testing Structural Equation Models, (pp. 136–162). Sage.
Bubbico, G., Chiacchiaretta, P., Parenti, M., Di Marco, M., Panara, V., Sepede, G., … Perrucci, M. (2019). Effects of second language learning on the plastic aging brain: Functional connectivity, cognitive decline, and reorganization. Frontiers in Neuroscience, 15(13), 423–433. https://doi.org/10.3389/fnins.2020.00108.
Bui, G., & Kong, A. (2019). Metacognitive instruction for peer review interaction in L2 writing. Journal of Writing Research, 11(2), 357–392. https://doi.org/10.17239/jowr-2019.11.02.05.
Carroll, D. (2004). Psychology of language. Nelson Education.
Carvalho, P. F., & Goldstone, R. L. (2021). The most efficient sequence of study depends on the type of test. Applied Cognitive Psychology, 35(1), 82–97. https://doi.org/10.1002/acp.3740.
Cer, E. (2019). The instruction of writing strategies: The effect of the metacognitive strategy on the writing skills of pupils in secondary education. SAGE Open, 9(2), 215824401984268. https://doi.org/10.1177/2158244019842681.
Chein, J. M., & Morrison, A. B. (2010). Expanding the mind’s workspace: Training and transfer effects with a complex working memory span task. Psychonomic Bulletin & Review, 17(2), 193–199. https://doi.org/10.3758/PBR.17.2.193.
Cho, G., Hwang, H., Sarstedt, M., & Ringle, C. M. (2020). Cut-off criteria for overall model fit indexes in generalized structured component analysis. Journal of Market Analysis, 8(4), 189–202. https://doi.org/10.1057/s41270-020-00089-1.
Chun, S. Y. (2014). EFL learners’ beliefs about native and non-native English-speaking teachers: Perceived strengths, weaknesses, and preferences. Journal of Multilingual and Multicultural Development, 35(6), 563–579. https://doi.org/10.1080/01434632.2014.889141.
Cottini, M., Basso, D., & Palladino, P. (2018). The role of declarative and procedural metamemory in event-based prospective memory in school-aged children. Journal of Experimental Child Psychology, 166, 17–33. https://doi.org/10.1016/j.jecp.2017.08.002.
Cousins, J., & Fernández, G. (2019). The impact of sleep deprivation on declarative memory. Progress in Brain Research, 246, 27–53. https://doi.org/10.1016/bs.pbr.2019.01.007.
Craik, F., Luo, L., & Sakuta, Y. (2010). Effects of aging and divided attention on memory for items and their contexts. Psychology and Aging, 25. https://doi.org/10.1037/a0020276.
Daneman, M., & Hannon, B. (2007). What do working memory span tasks like reading span really measure? In N. Osaka, R. H. Logie, & M. D’Esposito (Eds.), The Cognitive Neuroscience of Working Memory, (pp. 29–47). https://doi.org/10.1093/acprof:oso/9780198570394.003.0002.
Dermanova, I. B., & Manukyan, V. R. (2010). Personality maturity: To description of psychological phenomenon. Saint Petersburg University Bulletin, 4(12), 68–73.
Dixon, R. A., & Hultsch, D. F. (1983). Structure and development of metamemory in adulthood. Journal of Gerontology, 38(6), 682–688. https://doi.org/10.1093/geronj/38.6.682.
Dornyei, Z., & Taguchi, T. (2010). Questionnaires in second language research construction, administration and processing, (2nd ed., ). Routledge.
Double, K., & Birney, D. (2019). Reactivity to measures of metacognition. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.02755.
Dunlosky, J., & Bjork, R. A. (2008). The integrated nature of metamemory and memory. In J. Dunlosky, & R. A. Bjork (Eds.), Handbook of metamemory and memory, (pp. 11–28). Psychology Press.
Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Sage Publications.
Dunlosky, J., & Thiede, K. W. (2013). Metamemory. In D. Reisberg (Ed.), Oxford library of psychology, (pp. 283–298). Oxford University Press.
Dunning, D. L., & Holmes, J. (2014). Does working memory training promote the use of strategies on untrained working memory tasks? Memory and Cognition, 42(6), 854–862. https://doi.org/10.3758/s13421-014-0410-5.
Durand López, E. M. (2021). A bilingual advantage in memory capacity: Assessing the roles of proficiency, number of languages acquired and age of acquisition. International Journal of Bilingualism, 25(3), 606–621. https://doi.org/10.1177/1367006920965714.
Efklides, A. (2009). The role of metacognitive experiences in the learning process. Psicothema, 21, 76–82.
Einstein, G., & Mcdaniel, M. (2007). Prospective memory and metamemory: The skilled use of basic attentional and memory processes. Psychology of Learning and Motivation, 48, 145–173. https://doi.org/10.1016/S0079-7421(07)48004-5.
Elimam, A., & Chilton, P. (2018). The paradoxical hybridity of words. Language and Cognition, 10(2), 208–233. https://doi.org/10.1017/langcog.2017.20.
Ellah, B., Achor, E., & Enemarie, V. (2019). Problem-solving skills as correlates of attention span and working memory of low ability level students in senior secondary schools. Journal of Education and e-Learning Research, 6(3), 135–141. https://doi.org/10.20448/journal.509.2019.63.135.141.
Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.
Gathercole, S. E., Dunning, D. L., Holmes, J., & Norris, D. (2019). Working memory training involves learning new skills. Journal of Memory and Language, 105, 19–42. https://doi.org/10.1016/j.jml.2018.10.003.
Hair Jr., J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis, (7th ed., ). Prentice Hall.
Han, F. (2020). Self-regulated metacognitive and cognitive strategy use in L2 reading among high-, moderate-, and low-achieving readers. In E. Balashov (Ed.), Self-regulated learning, cognition and metacognition, (pp. 145–164). Nova Science.
Heckathorn, D. D. (2002). Respondent-driven sampling II: Deriving valid estimates from chain-referral samples of hidden populations. Social Problems, 49(1), 11–34. https://doi.org/10.1525/sp.2002.49.1.11.
Hertzog, C., & Curley, T. (2018). Metamemory and cognitive aging. Oxford Research Encyclopedia of Psychology, 1–37. https://doi.org/10.1093/acrefore/9780190236557.013.377.
Horn, J. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. https://doi.org/10.1007/BF02289447.
Hu, X., Luo, L., & Fleming, S. M. (2019). A role for metamemory in cognitive offloading. Cognition, 193, 104012. https://doi.org/10.1016/j.cognition.2019.104012.
Huff, M., & Bodner, G. (2013). When does memory monitoring succeed versus fail? Comparing item-specific and relational encoding in the DRM paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(4), 1246–1256. https://doi.org/10.1037/a0031338.
Hui, B. S., & Wold, H. (1982). Consistency and consistency at large of partial least squares estimates. In K. G. Jöreskog, & H. Wold (Eds.), Systems under indirect observation: Part II, (pp. 119–130).
Hultsch, D. F., Hertzog, C., Dixon, R. A., & Davidson, H. (1988). Memory self-knowledge and self-efficacy in the aged. In M. L. Howe, & C. J. Brainerd (Eds.), Cognitive development in adulthood: Progress in cognitive development research, (pp. 65–92). Springer-Verlag.
Janes, J., Rivers, M., & Dunlosky, J. (2018). The influence of making judgments of learning on memory performance: Positive, negative, or both? Psychonomic Bulletin & Review, 25(6), 2356–2364. https://doi.org/10.3758/s13423-018-1463-4.
Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35(4), 401–415. https://doi.org/10.1007/BF02291817.
Kazi, S., Kazali, E., Makris, N., Spanoudis, G., & Demetriou, A. (2019). Cognizance in cognitive development: A longitudinal study. Cognitive Development, 52. https://doi.org/10.1016/j.cogdev.2019.100805.
Kelley, P., & Whatson, T. (2013). Making long-term memories in minutes: A spaced learning pattern from memory research in education. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00589.
Kilic, A., Criss, A. H., Malmberg, K. J., & Shiffrin, R. M. (2017). Models that allow us to perceive the world more accurately also allow us to remember past events more accurately via differentiation. Cognitive Psychology, 92, 65–86. https://doi.org/10.1016/j.cogpsych.2016.11.005.
Kline, R. B. (2011). Methodology in the social sciences: Principles and practice of structural equation modelling, (3rd ed., ). Guilford Press.
Kline, R. B. (2016). Principles and practice of structural equation modeling, (4th ed., ). Guilford Press.
Klingberg, T. (2010). Training and plasticity of working memory. Trends in Cognitive Sciences, 14(7), 317–324. https://doi.org/10.1016/j.tics.2010.05.002.
Kwan, J. L. Y., & Chan, W. (2011). Comparing standardized coefficients in structural equation modelling: A model reparameterization approach. Behavior Research Methods, 43(3), 730–745. https://doi.org/10.3758/s13428-011-0088-6.
Laine, M., Fellman, D., Waris, O., & Nyman, T. J. (2018). The early effects of external and internal strategies on working memory-updating training. Scientific Reports, 8(1), 4045. https://doi.org/10.1038/s41598-018-22396-5.
Lee, S. Y. & Song, X. Y. (2004). Evaluation of the Bayesian and maximum likelihood approaches in analyzing structural equation models with small sample sizes. Multivariate Behavioral Research 39(4), 653–686.
Li, Q., Chen, X., Su, Q., Liu, S., & Huang, J. (2019). Adapt retrieval rules and inhibit already-existing world knowledge: Adjustment of world knowledge’s activation level in auditory sentence comprehension. Language and Cognition, 11(4), 645–668. https://doi.org/10.1017/langcog.2019.41.
Logan, J., Castel, A., Haber, S., & Viehman, E. (2012). Metacognition and the spacing effect: The role of repetition, feedback, and instruction on judgments of learning for massed and spaced rehearsal. Metacognition and Learning, 7(3), 175–195. https://doi.org/10.1007/s11409-012-9090-3.
Lohmöller, J. B. (1989). Latent variable path modelling with partial least squares. Physica-Verlag, Heidelberg. https://doi.org/10.1007/978-3-642-52512-4.
Maki, R. H., Willmon, C., & Pietan, A. (2009). Basis of metamemory judgments for text with multiple-choice, essay and recall tests. Applied Cognitive Psychology, 23(2), 204–222. https://doi.org/10.1002/acp.1440.
Mandolesi, L., Polverino, A., Montuori, S., Foti, F., Ferraioli, G., Sorrentino, P., & Sorrentino, G. (2018). Effects of physical exercise on cognitive functioning and wellbeing: Biological and psychological benefits. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.00509.
Margeaux, V. A., Ayanna, K. T., & Angela, H. G. (2017). Confidence moderates the role of control beliefs in the context of age-related changes in misinformation susceptibility. Experimental Aging Research, 43(3), 305–322. https://doi.org/10.1080/0361073X.2017.1298960.
Martinez, M. E. (2006). What is metacognition? Phi Delta Kappan, 87(9), 696–699. https://doi.org/10.1177/003172170608700916.
Mathieu, J. E., & Taylor, S. R. (2006). Clarifying conditions and decision points for mediational type inferences in organizational behaviour. Journal of Organizational Behavior, 27(8), 1031–1056. https://doi.org/10.1002/job.406.
McKinley, G. L., & Benjamin, A. (2020). The role of retrieval during study: Evidence of reminding from overt rehearsal. Journal of Memory and Language, 114, 104–128. https://doi.org/10.1016/j.jml.2020.104128.
Mieth, L., Schaper, M. L., Kuhlmann, B. G., & Bell, R. (2021). Memory and metamemory for social interactions: Evidence for a metamemory expectancy illusion. Memory & Cognition, 49(1), 14–31. https://doi.org/10.3758/s13421-020-01071-z.
Myers, S. J., Rhodes, M. G., & Hausman, H. E. (2020). Judgments of learning (JOLs) selectively improve memory depending on the type of test. Memory & Cognition, 48(5), 745–758. https://doi.org/10.3758/s13421-020-01025-5.
Nakata, T. (2015). Are learners aware of effective ways to learn second language vocabulary from retrieval? Perceived effects of relative spacing, absolute spacing, and feedback timing on vocabulary learning. Vocabulary Learning and Instruction, 4, 66–73.
Oberauer, K., Süß, H.-M., Wilhelm, O., & Wittmann, W. W. (2008). Which working memory functions predict intelligence? Intelligence, 36(6), 641–652. https://doi.org/10.1016/j.intell.2008.01.007.
Osborne, J. W., Costello, A. B., & Kellow, J. T. (2008). Best practices in exploratory factor analysis. In J. W. Osborne (Ed.), Best practices in quantitative methods, (pp. 205–213). Sage Publishing. https://doi.org/10.4135/9781412995627.d18.
Park, S., Ryu, S. H., Yoo, Y., Yang, J. J., Kwon, H., Youn, J. H., … Lee, J.-Y. (2018). Neural predictors of cognitive improvement by multi-strategic memory training based on metamemory in older adults with subjective memory complaints. Scientific Reports, 8(1), 1095. https://doi.org/10.1038/s41598-018-19390-2.
Peng, P., & Fuchs, D. (2017). A randomized control trial of working memory training with and without strategy instruction: Effects on young children’s working memory and comprehension. Journal of Learning Disabilities, 50(1), 62–80. https://doi.org/10.1177/0022219415594609.
Peng, Z., Cimin, D., Ba, Y., Zhang, L., Yongcong, S., & Tian, J. (2020). Effect of sleep deprivation on the working memory-related N2-P3 components of the event-related potential waveform. Frontiers in Neuroscience, 14. https://doi.org/10.3389/fnins.2020.00469.
Pfenninger, S., & Singleton, D. (2019). A critical review of research relating to the learning, use and effects of additional and multiple languages in later life. Language Teaching, 52(4), 419–449. https://doi.org/10.1017/S0261444819000235.
Putnam, A. (2015). Mnemonics in education: Current research and applications. Translational Issues in Psychological Science, 1. https://doi.org/10.1037/tps0000023.
Rakow, T., Newell, B. R., & Zougkou, K. (2010). The role of working memory in information acquisition and decision-making: Lessons from the binary prediction task. Quarterly Journal of Experimental Psychology, 63(7), 1335–1360. https://doi.org/10.1080/17470210903357945.
Ralby, A., Mentzelopoulos, M., & Cook, H. (2017). Learning languages and complex subjects with memory palaces. In D. Beck et al. (Eds.), Immersive Learning Research Network. Communications in Computer and Information Science, (pp. 217–228). Springer. https://doi.org/10.1007/978-3-319-60633-0_18.
Rankin, T. (2017). (Working) memory and L2 acquisition and processing. Second Language Research, 33(3), 389–399. https://doi.org/10.1177/0267658316645387.
Riegel, M., Wierzba, M., Grabowska, A., Jednoróg, K., & Marchewka, A. (2015). Effect of emotion on memory for words and their context. The Journal of Comparative Neurology, 524(8), 1636–1645. https://doi.org/10.1002/cne.23928.
Robinson, P. (2017). Attention and Awareness. In J. Cenoz, D. Gorter, & S. May (Eds.), Language awareness and multilingualism. Encyclopaedia of language and education, (3rd ed., ). Springer. https://doi.org/10.1007/978-3-319-02240-6_8.
Royle, J., & Lincoln, N. B. (2008). The everyday memory questionnaire-revised: Development of a 13-item scale. Disability & Rehabilitation, 30(2), 114–121. https://doi.org/10.1080/09638280701223876.
Salmi, J., Nyberg, L., & Laine, M. (2018). Working memory training mostly engages general-purpose large-scale networks for learning. Neuroscience & Biobehavioral Reviews, 83(3), 108–122. https://doi.org/10.1016/j.neubiorev.2018.03.019.
Schaper, M. L., & Bayen, U. (2021). The metamemory expectancy illusion in source monitoring affects metamemory control and memory. Cognition, 206, 104468. https://doi.org/10.1016/j.cognition.2020.104468.
Small, G., Lee, J., Kaufman, A., Jalil, J., Siddarth, P., Gaddipati, H., … Bookheimer, S. (2020). Brain health consequences of digital technology use. Dialogues in Clinical Neuroscience, 22, 179–187. https://doi.org/10.31887/DCNS.2020.22.2/gsmall.
Son, L. (2010). Metacognitive control and the spacing effect. Journal of experimental psychology. Learning, memory, and cognition, 36. https://doi.org/10.1037/a0017892.
Sosik, J. J., Kahal, S. S., & Piovoso, M. J. (2009). Silver bullet or voodoo statistics? A primer for using the partial least squares data analytic technique in group and organization research. Group and Organization Management, 34(1), 5–36. https://doi.org/10.1177/1059601108329198.
Spanoudis, G., & Demetriou, A. (2020). Mapping mind-brain development: Towards a comprehensive theory. Journal of Intelligence, 8(2), 19. https://doi.org/10.3390/jintelligence8020019.
Staniloiu, A., & Markowitsch, H. (2019). Episodic memory is emotionally laden memory, requiring amygdala involvement. Behavioural and Brain Sciences, 42, E299. https://doi.org/10.1017/S0140525X19001857.
Stone, N. J. (2000). Exploring the relationship between calibration and self-regulated learning. Educational Psychology Review, 12(4), 437–475. https://doi.org/10.1023/A:1009084430926.
Strobach, T., & Schubert, T. (2016). Video game training and effects on executive functions. In T. Strobach, & J. Karbach (Eds.), Cognitive training: An overview of features and applications, (pp. 117–125). Springer International Publishing AG. https://doi.org/10.1007/978-3-319-42662-4_11.
Susser, J. A., & Mulligan, N. W. (2019). Exploring the intrinsic-extrinsic distinction in prospective metamemory. Journal of Memory and Language, 104, 43–55. https://doi.org/10.1016/j.jml.2018.09.003.
Tamminen, J., Newbury, C., Crowley, R., Vinals, L., Cevoli, B., & Rastle, K. (2020). Generalisation in language learning can withstand total sleep deprivation. Neurobiology of Learning and Memory, 173. https://doi.org/10.1016/j.nlm.2020.107274.
Tauber, S. K., & Rhodes, M. G. (2012). Measuring memory monitoring with judgements of retention (JORs). Quarterly Journal of Experimental Psychology, 65(7), 1376–1396. https://doi.org/10.1080/17470218.2012.656665.
Tonković, M., & Vranić, A. (2011). Self-evaluation of memory systems: Development of the questionnaire. Aging & Mental Health, 15(7), 830–837. https://doi.org/10.1080/13607863.2011.569483.
Troyer, A. K., Leach, L., Vandermorris, S., & Rich, J. B. (2019). Measuring metamemory in diverse populations and settings: A systematic review and meta-analysis of the multifactorial memory questionnaire. Memory, 27(7), 931–942. https://doi.org/10.1080/09658211.2019.1608255.
Van Ede, D. M., & Coetzee, C. H. (1996). The Metamemory, Memory Strategy and Study Technique Inventory (MMSSTI): A factor analytic study. South African Journal of Psychology, 26(2), 89–95. https://doi.org/10.1177/008124639602600204.
Wang, J., & Wang, X. (2020). Structural equation modeling: Applications using Mplus, (2nd ed., ). Wiley.
Wang, L. Y., & Lin, T. B. (2013). The representation of professionalism in native English-speaking teacher’s recruitment policies: A comparative study of Hong Kong, Japan, Korea and Taiwan. English Teaching, 12(3), 5–22.
Wang, Y. A., & Rhemtulla, M. (2021). Power analysis for parameter estimation in structural equation modeling: A discussion and tutorial. Advances in Methods and Practices in Psychological Science., 4(1), 251524592091825. https://doi.org/10.1177/2515245920918253.
Wass, S., Scerif, G., & Johnson, M. (2012). Training attentional control and working memory - Is younger, better? Developmental Review, 32(4), 360–387. https://doi.org/10.1016/j.dr.2012.07.001.
Zalbidea, J., & Sanz, C. (2020). Does learner cognition count on modality? Working memory and L2 morphosyntactic achievement across oral and written tasks. Applied Psycholinguistics, 41(5), 1171–1196. https://doi.org/10.1017/S0142716420000442.
Zhang, D., & Zhang, L. J. (2019). Metacognition and self-regulated learning (SRL) in second/foreign language teaching. In X. Gao (Ed.), Second handbook of English language teaching. Springer International Handbooks of Education, (pp. 1–15). Springer.
The authors wish to thank the student teachers for participating in the present study. The authors would also like to express their gratitude to the private language institute owners who allowed the lead researcher to collect the data for the study. Last but not least, thanks go to the two anonymous reviewers who provided us with constructive feedback to improve the quality of our paper.
There was no funding for this research.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Nour, P., Esfandiari, R. & Zarei, A.A. Development and validation of a metamemory maturity questionnaire in the context of English as a foreign language. Lang Test Asia 11, 24 (2021). https://doi.org/10.1186/s40468-021-00141-6