Chinese university students’ perceptions of assessment tasks and classroom assessment environment
© Cheng et al. 2015
Received: 13 April 2015
Accepted: 6 August 2015
Published: 1 September 2015
Classroom assessment tasks and the classroom assessment environment are central to supporting student learning, yet they are under-studied at the tertiary level, especially in China’s test-driven culture. This study explores the relationship between students’ perceptions of assessment tasks and of the classroom assessment environment within the university context of teaching English as a foreign language (EFL) in China. A questionnaire was designed and administered, based on Dorman and Knightley’s (2006) Perceptions of Assessment Tasks Inventory (PATI) and Alkharusi’s (2011) scale for measuring students’ perceptions of the classroom assessment environment. The PATI includes five subscales: congruence with planned learning, authenticity, student consultation, transparency, and diversity. Alkharusi’s scale comprises two subscales: learning-oriented and performance-oriented classroom assessment environment. Participants were 620 university students from three universities in China. Factor analysis confirmed the original five-factor structure of the PATI and the two-factor structure of Alkharusi’s scale within this Chinese research context. Multiple regression analyses exploring the interrelationship showed that congruence with planned learning, authenticity, student consultation, and transparency significantly predicted the learning-oriented classroom assessment environment, explaining 48 % of the variance. Congruence with planned learning and student consultation negatively, and diversity positively, predicted the performance-oriented classroom assessment environment, explaining 12 % of the variance. The findings highlight congruence with planned learning and student consultation as the two core values of classroom assessment tasks in mediating the classroom assessment environment.
This study addresses the research gap in our limited understanding of the relationship between classroom assessment tasks and assessment environment, and aids teachers in structuring their day-to-day classroom assessment practices in support of their students’ learning.
Assessment plays a central role in teaching and learning. It has long been used in education for various high-stakes decision-making purposes, e.g., selection and placement, guiding and correcting learning, and grading achievement (Crooks 1988). The prevalence of large-scale high-stakes testing, and its impact on stakeholders, has been well documented in education. There is a set of relationships, intended and unintended, between testing, teaching, and learning. However, only recently has research on classroom assessment begun to explore the roles that the assessment tasks teachers use, and the classroom environment itself, play in supporting student learning.
The classroom assessment environment (Stiggins & Conklin 1992) is an important part of the classroom atmosphere. The ways teachers communicate their expectations to students, and the ways they provide feedback on how well these expectations are being met, help students form concepts of what is important to learn and how good they are at learning it. The classroom assessment environment, and the particular assessment tasks within it, therefore constitute a salient area of research. Relevant aspects include (a) student perceptions of the tasks, their interest, and their importance; (b) students’ self-efficacy to accomplish them and their reasons for doing so; and (c) the goal orientations at the task level. Without knowing the relationship between classroom assessment tasks and the classroom assessment environment, especially from the students’ perspective, we cannot know how teacher assessment practices support student learning.
As part of the worldwide movement to combine assessment of learning with assessment for learning, in order to promote student learning, the Chinese Ministry of Education developed the College English Curriculum Requirements (CECR) (CMoE 2004; 2007) to introduce this balanced assessment concept. The stipulated College English assessment framework is taking a two-pronged approach: Firstly, the original external examination system, the College English Test (CET) testing system, which almost all undergraduate students in China are required to take, is being reformed in a substantial way (Jin 2005). Secondly, formative assessment elements are to be incorporated into the existing summative assessment framework (Wang 2007).
Teachers and students are now caught between a long examination history for selection purposes and continued large-scale national English language testing, and this recent curriculum reform promoting formative classroom assessment practices. Recent research also demonstrates that translating the CECR formative assessment initiative into classroom practices is a complex issue that involves many factors such as teachers’ beliefs, students’ perceptions, institutional differences, and educational tradition (Chen et al. 2013). Concern remains as to whether this worldwide (‘foreign’) assessment movement works within the local Chinese university context, which is highlighted as test-centered, textbook-centered, and teacher-centered (Cheng 2010). China has a long history of using large-scale testing, which still enjoys widespread acceptance and recognition as a fair measurement for selecting the best talents. Within such a highly summative assessment context, the success of the balanced assessment framework will foremost involve an informed understanding of assessment tasks and classroom assessment environment. Within this context, we conducted an empirical study to explore Chinese university students’ perception of assessment tasks and their classroom assessment environment.
Nature of assessment tasks and assessment environment
A substantial proportion of classroom time involves exposing students to a variety of assessment tasks (Stiggins & Conklin 1992). As students process these tasks, they develop beliefs about their importance, utility, value, and difficulty. The characteristics of the assessment tasks, that is, how these tasks are conducted in the classroom as perceived by the students, are central to the understanding of the quality of student learning (Dorman et al. 2006; Watering et al. 2008). Since students are direct participants in the assessment process, their perceptions of the assessment tasks are the foundation of successful formative assessment (Brookhart 2013).
Dorman and Knightley (2006) developed and validated an instrument to investigate secondary school students’ perceptions of assessment tasks along five dimensions: congruence with planned learning, authenticity, student consultation in the assessment process, transparency about the purposes and forms of the assessment, and diversity. This study argues for more research identifying the perceived characteristics of assessment tasks supportive of a classroom environment that is conducive to enhanced student learning. The nature of the tasks, the environment in which they are undertaken, and how these tasks and their environment are perceived significantly affect the depth of student engagement (Fox et al. 2001; Lizzio & Wilson 2013).
The classroom environment is the overall sense or meaning that students develop from the various assessment tasks (Brookhart & DeVoge 1999). Brookhart (2004) rightly points out that each classroom has its own assessment environment perceived by the students while their teacher establishes assessment purposes, assigns assessment tasks, sets performance criteria and standards, gives feedback, and monitors students’ progress. Alkharusi (2011) investigated students’ perception of the classroom environment and established two facets: learning-oriented, focusing on assessment practices to enhance student learning; and performance-oriented, focusing on grading and comparing students’ learning. However, despite the increasing literature on classroom assessment practices (MacMillan 2013), research on the relationship between assessment tasks and the classroom assessment environment perceived by students is still lacking.
Among the limited existing empirical studies, Alkharusi et al. (2014a) examined the inter-correlations between students’ perceptions of the assessment tasks and the classroom assessment environment. Their focus, however, was on gender differences in a Middle Eastern research context where male and female students were educated separately rather than coeducationally. They found statistically significant gender differences in perceptions of the assessment tasks and classroom assessment environment. For both male and female students, the learning-oriented classroom assessment environment was correlated with congruence with planned learning, authenticity, student consultation, transparency, and diversity. The performance-oriented classroom assessment environment, however, was correlated only with student consultation among females, while it was associated with all assessment task variables among males.
In the same research context, Alkharusi et al. (2014b) investigated how teachers’ classroom assessment practices and students’ perceptions of classroom assessment tasks related to student academic self-efficacy by collecting data from 1,457 secondary school students and 99 teachers. They found that students’ perceptions of classroom assessment tasks all had significant positive influences on their academic self-efficacy. This outcome indicates that a relationship exists between the assessment tasks and the assessment environment that supports student learning.
Wang and Cheng (2010) explored the relationship between students’ perceptions of the classroom assessment environment and their goal orientations among 503 first-year Chinese EFL university students. They found that the students perceived their classroom assessment environment to be learning-oriented, test-oriented, and praise-oriented. Students’ perception of the assessment environment as learning-oriented positively predicted their adoption of mastery goals, whereas perceptions of the assessment environment as test-oriented or praise-oriented positively predicted their adoption of performance goals.
How do students perceive their classroom assessment tasks and environment?
What is the relationship, if any, between the assessment tasks and the classroom assessment environment?
Are there significant differences in students’ perceptions of assessment tasks and classroom assessment environment by university, subject major and self-perceived language proficiency?
The questionnaire used in this study was designed based on Dorman and Knightley’s (2006) Perceptions of Assessment Tasks Inventory (PATI) (35 items) and Alkharusi’s (2011) assessment environment scale (16 items). We chose these two instruments for their theoretical grounding and psychometric quality. The Cronbach’s alpha coefficients of the PATI subscales range from 0.63 to 0.85 (Dorman & Knightley 2006, p. 54), and those of the learning-oriented and performance-oriented classroom assessment environment subscales are .82 and .75, respectively (Alkharusi 2011). These 51 items constitute the two major sections of the questionnaire. Section 1, on the PATI, consists of five scales: congruence with planned learning, i.e., the extent to which assessment tasks align with the goals, objectives, and activities of the learning program (items 1–7); authenticity, i.e., the extent to which assessment tasks feature real-life situations (items 8–14); student consultation, i.e., the extent to which students are consulted and informed about the forms of assessment tasks being employed (items 15–21); transparency, i.e., the extent to which the purposes and forms of assessment tasks are well-defined and clear to the learner (items 22–28); and diversity, i.e., the extent to which all students have an equal chance at completing assessment tasks (items 29–35).
Section 2, Classroom Assessment Environment, consists of two scales: learning-oriented assessment environment, focusing on classroom assessment practices that improve student learning and mastery of content materials (items 1–9); and performance-oriented assessment environment, focusing on harshness of assessment, grading, public evaluation and recognition practices (items 10–16). The third section of the questionnaire consists of seven demographic items collecting information on the participants’ gender, age, home province, years of learning English, years at university, major area of study, and perceived level of English proficiency. These items provide the participants’ contextual information and their English learning background.
The questionnaire was pre-tested on five Chinese students studying in a Canadian university. As a result, the directions were made clearer and ambiguous items were revised. The questionnaire was then translated into Chinese by an experienced translator and double-checked by a researcher proficient in both English and Chinese. In addition, pilot tests were conducted with a small group of Chinese students (n = 4) whose characteristics were similar to the survey participants. Consequently, three items were revised to improve clarity and idiomatic expression for Chinese participants. Questionnaires were then administered, in the summer of 2013, to 652 students from a comprehensive university in Anhui (East China), a foreign language studies university in Guangdong (South China), and a polytechnic university in Chongqing (Southwest China). Altogether, 637 students responded to the questionnaire – a return rate of 97.7 %. The data were entered into SPSS 20 independently for each university, and then merged into one data set.
The data were first checked for missing values. Cases were deleted if more than 10 % of the questionnaire items were not completed, which resulted in 10 cases being removed, and 610 cases retained for subsequent analysis. The remaining missing values were construed to have been omitted at random and were replaced by means. Data were then analyzed in four phases: First, means and standard deviations were calculated with regard to participants’ perceptions of the classroom assessment tasks and the classroom assessment environment, in order to answer the first research question.
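The screening procedure described above (dropping cases with more than 10 % missing answers, then mean-imputing the remainder) can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors’ actual SPSS routine, and the function name is our own:

```python
import numpy as np

def screen_responses(data, max_missing=0.10):
    """Drop cases with too many missing answers, then mean-impute the rest.

    data: 2-D array-like of item responses, with np.nan for omitted answers.
    """
    data = np.asarray(data, dtype=float)
    # Proportion of missing items per respondent (row)
    missing_rate = np.isnan(data).mean(axis=1)
    kept = data[missing_rate <= max_missing]
    # Replace remaining missing values with the item (column) mean
    col_means = np.nanmean(kept, axis=0)
    rows, cols = np.where(np.isnan(kept))
    kept[rows, cols] = col_means[cols]
    return kept
```

With a 51-item questionnaire, a respondent omitting six or more items would be dropped under this rule.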
Second, exploratory factor analysis was run to detect the latent factor structures of Dorman and Knightley’s (2006) PATI and Alkharusi’s (2011) assessment environment scale. Principal component analysis was employed to extract the factors, primarily because this method was used in the original validation processes for both instruments and is also widely used in the language assessment literature (Ockey 2014). To improve the interpretability of the factor extraction results, direct oblimin rotation was selected, since oblique rotation allows the factors to be correlated, which is most often the case in the social sciences (Bentler 2008). This choice was also supported by previous studies showing that all subscales of the PATI are moderately or highly correlated (Dorman & Knightley 2006), and that the learning-oriented and performance-oriented assessment environments are moderately correlated among both male and female students (Alkharusi et al. 2014a). To determine the items retained for further analysis, this study adopted Tabachnick and Fidell’s (2001) suggestion that .32 is a reasonable cutoff for the lowest acceptable factor loading.
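As a rough sketch of the extraction step only, unrotated principal-component loadings can be computed from the eigen-decomposition of the item correlation matrix. The subsequent oblimin rotation is considerably more involved and is normally delegated to a statistics package; this minimal numpy illustration (function name ours) stops at extraction:

```python
import numpy as np

def pca_loadings(data, n_factors):
    """Unrotated principal-component loadings of an (n_cases, n_items) matrix.

    Loadings are eigenvectors of the item correlation matrix scaled by the
    square root of their eigenvalues. Oblimin rotation would follow this step.
    """
    R = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]            # largest components first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
```

Applying the .32 cutoff mentioned above then amounts to flagging rows of the (rotated) loading matrix whose largest absolute loading falls below .32.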
Third, stepwise multiple regressions were conducted to examine the relationship between the perceived classroom assessment tasks and the perceived classroom assessment environment. This analysis, together with the above factor analysis, was undertaken to answer the second research question. Students’ perceptions of the classroom assessment tasks were used as independent variables (predictors) and their perceptions of the classroom assessment environment were treated as dependent variables.
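Stepwise selection of this kind can be illustrated with a small forward-selection loop over ordinary least squares fits, adding a predictor only while adjusted R² improves. This is a simplified stand-in for SPSS’s stepwise procedure (which uses entry/removal F-tests rather than adjusted R²); the function and variable names are ours:

```python
import numpy as np

def adj_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (intercept added)."""
    n, k = X.shape
    Xi = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(y, X, names):
    """Greedily add whichever predictor most improves adjusted R^2."""
    X = np.asarray(X, dtype=float)
    chosen, best = [], -np.inf
    improved = True
    while improved:
        improved, pick = False, None
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            score = adj_r2(y, X[:, chosen + [j]])
            if score > best:
                best, pick, improved = score, j, True
        if improved:
            chosen.append(pick)
    return [names[j] for j in chosen], best
```

In the analyses reported below, the five task-perception subscale scores would form the columns of X, with each environment subscale score in turn as y.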
Finally, one-way ANOVAs and t-tests were conducted to investigate differences in students’ perceptions of assessment tasks and the classroom assessment environment across the three participating universities, the four subject majors included, and students with differing levels of self-perceived language proficiency – the third research question.
Perceptions of classroom assessment tasks
The standard deviations (SDs) of all the items range between .84 and 1.13. The largest SD is that of item 24 (“I know in advance how I will be assessed”) (SD = 1.13), followed by two other relatively large SDs: items 20 (SD = 1.08) and 35 (SD = 1.07). Item 20 states “My teacher has explained to me how each form of assessment is used”, and item 35 states “I do work that is different from other students’ work”. The smallest SD is that of item 5 (“I am assessed in similar ways to the tasks I do in class”) (SD = .84), followed by the next three smallest SDs: items 13 (SD = .85), 14 (SD = .85) and 12 (SD = .86). Item 13 says “Assessment in English tests my ability to apply learning”, item 14 states “Assessment in English examines my ability to answer important questions”, and item 12 affirms “English assessment tasks check my understanding of topics”. These items (12, 13, 14) belong to the authenticity scale. In addition, the SDs of many items in the student consultation (5 out of 7), transparency (6 out of 7), and diversity (5 out of 7) scales are above 1.00, which indicates that students’ perspectives on these items are more varied.
Perceptions of classroom assessment environment
Exploratory factor analysis
Principal component analysis with direct oblimin rotation of perceptions of assessment task inventory
My assessment in English is a fair indicator of my work
My English tests are a fair indicator of what my class is trying to learn.
My assessment is a fair indication of what I do in English
I am assessed in similar ways to the tasks I do in class.
I am assessed on what the teacher has taught me.
I have answered English questions on topics that have been covered in class.
My assignments are related to what I am learning in English.
I am set assessment tasks that are different from other students’ tasks.
I use different assessment methods from other students.
I do work that is different from other students’ work.
I have helped the class develop rules for assessment in English.
I am given a choice of assessment tasks.
I select how I will be assessed in English.
I am given assessment tasks that suit my ability.
I am told in advance when I am being assessed.
I am told in advance on what I am being assessed.
I am told in advance why I am being assessed.
I know in advance how I will be assessed.
I understand the purpose of English assessment.
When I am faster than others, I move on to new assessment tasks.
I complete assessment tasks at my own speed.
I am clear about the forms of assessment being used.
I am aware of the types of assessment in English.
My teacher has explained to me how each form of assessment is used.
I understand what is needed in all English assessment tasks.
I know what is needed to successfully accomplish an English assessment task.
I am asked about the types of assessment I would like to have in English.
I ask my teacher about English assessment.
I am asked to apply my learning to real-life situations.
My English assessment tasks are meaningful.
I find English assessment tasks relevant to the real world.
Assessment in English tests my ability to apply learning.
My English assessment tasks are useful.
English assessment tasks check my understanding of topics.
Assessment in English examines my ability to answer important questions.
Principal component analysis with direct oblimin rotation of perceived classroom assessment environment
I am given a chance to correct my mistakes.
I receive continuous feedback from my teacher about my performance in English.
Our teacher helps us identify the places where we need more effort in future.
The assignments and tests encourage thinking.
Our teacher uses a variety of ways (e.g., tests, in-class tasks, homework assignments) to assess our mastery of English.
The assignments and activities are related to our everyday lives.
Our teacher holds us responsible for learning.
I can find out my strengths in English.
The assignments and tests are returned in a way that keeps our scores private.
Our teacher gives more importance to the grades than to the learning.
There is a mismatch between the learned subject materials and the assigned homework and tests.
Our teacher’s grading system is not clear.
The assessment results do not fairly reflect the effort we put into studying the subject.
Our teacher compares our performances to each other.
The in-class and homework assignments are not interesting.
The tests and assignments are difficult for us.
The internal consistency of all PATI subscales reached an acceptable level, ranging from .82 to .86 (see Table 4), according to George and Mallery (2003), who anchored internal consistency as follows: excellent (α ≥ .9), acceptable (.9 > α ≥ .8), good (.8 > α ≥ .7), questionable (.7 > α ≥ .6), poor (.6 > α ≥ .5), and unacceptable (.5 > α). The reliability coefficients of the two assessment environment subscales were .82 for the learning-oriented assessment environment and .70 for the performance-oriented assessment environment (see Table 5). These exploratory factor analysis results were used for the subsequent regression analysis.
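The internal-consistency coefficient reported above is Cronbach’s alpha, which can be computed directly from the item-score matrix. A minimal numpy version (function name ours) is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly parallel items yield alpha = 1, while uncorrelated items drive alpha toward 0, matching the George and Mallery anchors quoted above.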
Correlations between classroom assessment tasks and environment
1. Congruence with Planned Learning
5. Student Consultation
6. Learning-oriented Assessment Environment
7. Performance-oriented Assessment Environment
Stepwise regression analysis results (learning-oriented assessment environment as dependent variable)
Adj. R²
95.0 % CI
Congruence with planned learning
Stepwise regression analysis results (performance-oriented assessment environment as dependent variable)
Adj. R²
95.0 % CI
Congruence with planned learning
Not surprisingly, while diversity was a significant predictor of performance-oriented assessment environment, it was removed from the equation when learning-oriented assessment environment was used as the dependent variable. Similarly, while authenticity and transparency were significant in predicting learning-oriented assessment environment, they were removed when using performance-oriented assessment environment as the dependent variable. In addition, the co-occurrence of congruence with planned learning and student consultation showed that these two factors positively predicted learning-oriented assessment environment, and negatively predicted performance-oriented assessment environment.
One-way ANOVAs and t-tests
Post hoc investigation on the differences between universities
95 % CI
Congruence with planned learning
Post hoc investigation on the differences between majors
95 % CI
Humanities and social sciences
Business and management
Humanities and social sciences
Business and management
Finally, we compared students with different levels of self-perceived language proficiency to determine whether their perceptions of classroom assessment tasks and environment varied significantly. We compared two groups – students with low and medium language proficiency – using a t-test, excluding the group of students with high language proficiency because its sample size was very small (3 %). Levene’s tests showed that all the variables met the homogeneity of variance assumption. The t-test results showed that students with medium language proficiency perceived transparency in the classroom assessment tasks to be significantly higher than students with low language proficiency did [t(579) = 2.79, p < .01].
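The comparison reported above is an independent-samples Student’s t-test with pooled variance (appropriate here since Levene’s test supported equal variances). A minimal numpy sketch (function name ours) returning the t statistic and degrees of freedom is below; the p-value would then come from the t distribution, e.g., via a statistics package:

```python
import numpy as np

def pooled_t(a, b):
    """Student's t statistic and degrees of freedom for two independent
    samples, assuming equal population variances (pooled estimate)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

The reported df of 579 corresponds to the combined size of the low- and medium-proficiency groups minus two.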
Discussion and conclusions
This study addresses the research gap resulting from the lack of empirical research on student perceptions of their classroom assessment tasks and environment, and the relationship between these tasks and environment as it relates to student learning within the context of Chinese EFL tertiary education. A number of findings, as reported above, have enhanced our understanding of the nature of classroom assessment within this context, and identified areas for further research.
Nature of assessment tasks and environment
Our descriptive analysis showed that the participants of this study perceived their assessment tasks as highly related to what they were learning in their English classes. This demonstrates a match between assessment and learning from these students’ viewpoint. However, these students were seldom involved in developing the criteria for assessment, a finding also reported in previous studies (Cheng et al. 2004; Wang et al. 2013). Involving students in developing the criteria for assessment is a step further in supporting their learning, as the process of setting criteria can clarify the path toward reaching the learning goals. In terms of the classroom assessment environment, students strongly perceived that their teachers held them responsible for learning. This result is also supported by discussions of the role that teachers play in student learning in and around the Asian context (Carless 2011). Although these students did not identify a mismatch between what they learned and what was assessed, they stated that their assessment results did not fairly reflect the effort they put in. This outcome echoed what we found regarding the assessment tasks and pointed to an aspect of the assessment environment that needs further research. We need to know how effort is reflected in assessment results within this context and listen to students’ voices regarding the assessment of their learning. After all, students’ involvement in their learning, including their perceptions of it, is an indicator of quality classroom assessment.
Relationship between assessment tasks and environment
In order to explore the relationship between assessment tasks and the classroom environment, exploratory factor analyses were conducted; they identified the original five-factor structure of the PATI and the two-factor structure of Alkharusi’s scale on learning- and performance-oriented environments. This demonstrates the robustness of the instruments in another research context and at a different level of learning (secondary school vs. higher education), especially for the PATI scale on assessment tasks.
Multiple regression analyses showed that congruence with planned learning, authenticity, student consultation, and transparency significantly predicted the learning-oriented classroom assessment environment, explaining 48 % of the variance. Congruence with planned learning and student consultation negatively, and diversity positively, predicted the performance-oriented classroom assessment environment, explaining 12 % of the variance. This analysis highlights congruence with planned learning and student consultation as the two core values of classroom assessment tasks in mediating the classroom environment. These two assessment task variables had a medium correlation with each other, and both have been discussed extensively in the educational assessment literature (e.g., Barksdale-Ladd & Thomas 2000; Stiggins & Conklin 1992).
Our findings show that congruence with planned learning and student consultation are positive predictors of the learning-oriented classroom assessment environment and negative predictors of performance-oriented classroom assessment environment at the same time. This implies that aligning assessment tasks with the goals and objectives of the learning programs and effectively informing students regarding how they will be assessed potentially have twofold benefits. On the one hand, they may contribute to an environment where students focus on learning and mastery. On the other hand, they may potentially prevent the development of an environment where students compare themselves against each other. Furthermore, our findings also show that assigning authentic assessment tasks and clarifying the purposes and forms of these tasks may also help foster a learning-oriented assessment environment in the classroom.
Interestingly, diversity is the only one of the three statistically significant predictors that had a positive relationship with the performance-oriented assessment environment. Diversity had small correlations with the other four assessment task variables (from .32 to .41), while each of those four had medium correlations with one another (from .50 to .68). This pattern may help explain the results of the second regression model. The finding challenges the existing research literature, which concludes that giving all students an equal chance at completing assessment tasks helps create a fair environment (Taylor & Nolen 2008); it is not clear, however, whether such fairness directs students’ attention toward mastery or toward performance.
This finding may be explained by the context in which this research was conducted. Due to the nature of the Chinese examination-centered educational system, students in China tend to put an emphasis on scores and rankings (Guo 2012; Kirkpatrick & Zang 2011). It is possible that diversity in classroom assessment, such as the extent to which all students have an equal chance at completing assessment tasks, may have potentially intensified their performance-oriented assessment environment. When students are given an equal chance at completing assessment tasks, they might tend to be more competitive, which would contribute to the performance-oriented assessment environment. Future research is needed to arrive at a better interpretation on how and why diversity is associated with performance-oriented classroom assessment environment in the Chinese educational context.
The results derived from students’ perceptions of assessment tasks and the classroom assessment environment across the three participating universities, the four subject majors, and students with differing levels of self-perceived language proficiency demonstrated that significant differences exist by university, by subject major, and by how students perceived their own English language proficiency. Again, congruence with planned learning and student consultation (along with transparency) were the variables signifying the differences. We are not able to explain the reasons for these differences, but it seems clear that we need to conduct further research into the macro-societal context in addition to the micro-instructional/university assessment context. We recognize that this study is an initial exploratory effort into a restricted sample of Chinese tertiary students from three universities. Many of the findings provide new insights into the nature of classroom assessment, yet at the same time they point out aspects of this environment that cannot be tapped into through a survey study of this nature. A follow-up study using a qualitative research approach could enhance our understanding of classroom assessment within the Chinese EFL tertiary context.
The research findings have pedagogical implications for teachers structuring their day-to-day classroom assessment practices. To create a learning-oriented assessment environment where students focus on mastery learning, teachers are expected to align the classroom assessment tasks with the learning goals of the program, maximize the transferability of the knowledge and skills assessed in the tasks to real-life situations, and define and clarify the assessment purposes and forms before the assessment tasks are assigned to students. In addition, teachers are encouraged to consult with students about what forms of assessment tasks will be used. However, teachers need to be cautious when giving students equal opportunities to complete these tasks at various speeds. Efforts need to be made to minimize the possibility of creating an environment where students focus on mutual comparison and higher grades rather than self-improvement and meaningful learning.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- Alkharusi, H. (2011). Development and datametric properties of a scale measuring students’ perceptions of the classroom assessment environment. International Journal of Instruction, 4(1), 105–120.
- Alkharusi, H., Aldhafri, S., Alnabhani, H., & Alkalbani, M. (2014a). Modeling the relationship between perceptions of assessment tasks and classroom assessment environment as a function of gender. The Asia-Pacific Education Researcher, 23(1), 93–104.
- Alkharusi, H., Aldhafri, S., Alnabhani, H., & Alkalbani, M. (2014b). Classroom assessment: Teaching practices, student perceptions, and academic self-efficacy beliefs. Social Behavior and Personality, 42(5), 835–856.
- Barksdale-Ladd, M.A., & Thomas, K.F. (2000). What’s at stake in high-stakes testing: Teachers and parents speak out. Journal of Teacher Education, 51, 384–397.
- Bentler, P.M. (2008). EQS program manual. Encino, CA: Multivariate Software.
- Brookhart, S.M. (2004). Classroom assessment: Tensions and intersections in theory and practice. Teachers College Record, 106(3), 429–458.
- Brookhart, S.M. (2013). Classroom assessment in the context of motivation theory and research. In J.H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 35–54). Los Angeles: Sage.
- Brookhart, S.M., & DeVoge, J.G. (1999). Testing a theory about the role of classroom assessment in student motivation and achievement. Applied Measurement in Education, 12, 409–425.
- Carless, D. (2011). From testing to productive student learning: Implementing formative assessment in Confucian-heritage settings. New York: Routledge.
- Chen, Q., May, L., Klenowski, V., & Kettle, M. (2013). The enactment of formative assessment in English language classrooms in two Chinese universities: Teacher and student responses. Assessment in Education: Principles, Policy & Practice. doi:10.1080/0969594X.2013.790308
- Cheng, L. (2010). The history of examinations: Why, how, what, whom to select? In L. Cheng & A. Curtis (Eds.), English language assessment and the Chinese learner (pp. 13–26). New York, NY: Routledge.
- Cheng, L., Rogers, T., & Hu, H. (2004). ESL/EFL instructors’ classroom assessment practices: Purposes, methods and procedures. Language Testing, 21(3), 360–389.
- Cheng, L., Rogers, T., & Wang, X. (2008). Assessment purposes and procedures in ESL/EFL classrooms. Assessment & Evaluation in Higher Education, 33(1), 9–32.
- CMoE document. (2004). College English Curriculum Requirements (trial). Retrieved from http://www.edu.cn/20040120/3097997.shtml
- CMoE document. (2007). College English Curriculum Requirements. Retrieved from http://www.moe.gov.cn/publicfiles/business/htmlfiles/moe/moe_1846/201011/xxgk_110825.html
- Crooks, T.J. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438–481.
- Dorman, J.P., & Knightley, W.M. (2006). Development and validation of an instrument to assess secondary school students’ perceptions of assessment tasks. Educational Studies, 32, 47–58.
- Dorman, J.P., Fisher, D.L., & Waldrip, B.G. (2006). Classroom environment, students’ perceptions of assessment, academic efficacy and attitude to science: A LISREL analysis. In D. Fisher & M.S. Khine (Eds.), Contemporary approaches to research on learning environment: Worldviews (pp. 1–28). Australia: World Scientific Publishing.
- Fox, R.A., McManus, I.C., & Winder, B.C. (2001). The shortened study process questionnaire: An investigation of its structure and longitudinal stability using confirmatory factor analysis. British Journal of Educational Psychology, 71, 511–530.
- George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference 11.0 update (4th ed.). Boston: Allyn & Bacon.
- Guo, L. (2012). New curriculum reform in China and its impact on teachers. Canadian and International Education, 41(2), 87–105.
- Jin, Y. (2005). CET4/6 reform framework and prospect. China College Teaching (Zhongguo Daxue Jiaoxue), 5, 49–53.
- Kirkpatrick, R., & Zang, Y. (2011). The negative influences of exam-oriented education on Chinese high school students: Backwash from classroom to child. Language Testing in Asia, 1(3), 37–45.
- Lizzio, A., & Wilson, K. (2013). First-year students’ appraisal of assessment tasks: Implications for efficacy, engagement and performance. Assessment & Evaluation in Higher Education, 38(4), 389–406.
- McMillan, J.H. (Ed.). (2013). SAGE handbook of research on classroom assessment. Los Angeles: Sage.
- Ockey, G.J. (2014). Exploratory factor analysis and structural equation modeling. In A.J. Kunnan (Ed.), The companion to language assessment (pp. 1–21). Chichester, West Sussex: John Wiley & Sons. doi:10.1002/9781118411360wbcla114
- Stiggins, R.J., & Conklin, N.F. (1992). In teachers’ hands: Investigating the practices of classroom assessment. Albany: State University of New York Press.
- Tabachnick, B.G., & Fidell, L.S. (1989). Using multivariate statistics (2nd ed.). New York, NY: HarperCollins College Publishers.
- Tabachnick, B.G., & Fidell, L.S. (2001). Using multivariate statistics. Boston: Allyn and Bacon.
- Taylor, C.S., & Nolen, S.B. (2008). Classroom assessment: Supporting teaching and learning in real classrooms. Upper Saddle River, NJ: Pearson.
- Wang, X., & Cheng, L. (2010). Chinese EFL students’ perceptions of the classroom assessment environment and their goal orientations. In L. Cheng & A. Curtis (Eds.), English language assessment and the Chinese learner (pp. 202–218). New York, NY: Routledge.
- Wang, Q. (2007). The national curriculum changes and their effects on English language teaching in the People's Republic of China. In J. Cummins & C. Davison (Eds.), International handbook of English language teaching (Vol. 1, pp. 87–105). New York: Springer.
- Wang, W., Zeng, Y., & He, H. (2013). Students’ perceptions of the effects of rubric-referenced self-assessment on EFL writing: A developmental perspective. Paper presented at the 35th Language Testing Research Colloquium, Seoul, South Korea. Retrieved from http://www.ltrc2013.or.kr/download/LTRC2013Program0729.pdf
- Watering, G., Gijbels, D., Dochy, F., & Rijt, J. (2008). Students’ assessment preferences, perceptions of assessment and their relationships to study results. Higher Education, 56, 645–658.