
英语读写结合写作试题效度验证
(Validation of the English Reading-to-Write [Text-Based] Writing Test Task)

Author: 张新玲 (Zhang Xinling)
Publisher: 上海大学出版社 (Shanghai University Press)
Publication date: 2009-1
Pages: 299

Preface

  The present study, within Messick's unitary validity conception, collects theoretical and empirical evidence for the substantive and generalizability aspects of the construct validity of the text-based writing task in the National Matriculation English Test (Guangdong Version), a newly designed large-scale high-stakes test. It adopts a constructivist reading-to-write model specifying the metacognitive (planning and monitoring) and cognitive (selecting, organizing, and integrating) operations elicited in text-based writing. Three general research questions are generated: 1) whether the theoretical processes are actually tapped by the assessment task; 2) whether the two sub-tasks manifest the text-based writing construct differently; and 3) whether performance regularities entail the suitability of the text-based writing task for NMET (GD). Data were drawn from different sources via instruments constructed for this study. In response to the first two general research questions, questionnaire data were collected from the experts (N = 25), the instructors (N = 150), and the target candidates (N = 532). In addition, interview data from students (n = 36) complemented the questionnaire data qualitatively. The aggregation of the foregoing qualitative data, together with the coding and rating results of 189 compositions, responded to the third research question.

Synopsis

Drawing on Messick's unitary conception of validity, this study collects validity evidence for the reading-to-write (text-based) writing task of the National Matriculation English Test (Guangdong Version), focusing on two aspects of construct validity: the substantive aspect and the generalizability aspect. Theoretical and empirical evidence was gathered from multiple perspectives with a range of research instruments. The analyses found that candidates' writing processes differ between the summary and the response argumentation: because content construction in summary writing is more complex than in response argumentation, planning is relatively more important in summary writing; correspondingly, the response argumentation places higher demands on language production, so candidates attach more importance to monitoring it.

Table of Contents

Chapter 1 Introduction
  1.1 Rationale of the Present Study
    1.1.1 General Background of the Present Study
    1.1.2 Relevant Studies on Text-Based Writing Tasks
  1.2 Key Research Questions
  1.3 Definitions of Key Terms
  1.4 Contents of the Book
  1.5 Summary
Chapter 2 Literature Review
  2.1 Introduction
  2.2 Writing Assessment: A Historical View
  2.3 Orientation to English Teaching Objectives and NMET(GD)
    2.3.1 The Curriculum and the Teaching Objectives
    2.3.2 An Introduction to Writing Tests in NMET(GD)
  2.4 Theoretical Conceptualization of Text-Based Writing
    2.4.1 The Reading Perspective
    2.4.2 The Writing Perspective
    2.4.3 The Integrated Perspective
  2.5 Conceptualization of Text-Based Writing Test Tasks
    2.5.1 Advantages of Text-Based Writing Tasks
    2.5.2 Problems with Text-Based Writing Tasks
  2.6 Factors Influencing Students' Text-Based Writing Performance
    2.6.1 The Task Factors
    2.6.2 The Individual Factors
    2.6.3 The Writing Process
  2.7 Textual Measures of Text-Based Writing
    2.7.1 Content
    2.7.2 Organization
    2.7.3 Language
  2.8 Important Validation Research on Text-Based Writing Tests
  2.9 Theory of Validity and Validation
    2.9.1 Messick's Unitary Validity Concept
    2.9.2 Validation Procedures in General
  2.10 Summary
Chapter 3 Theoretical Framework
  3.1 Introduction
  3.2 Text-Based Writing Revisited
    3.2.1 Definition of Text-Based Writing Construct
    3.2.2 The Text-Based Writing Process Model
  3.3 Task Complexity of Summary and Response Argumentation
  3.4 Measures of Writing Products
    3.4.1 Language
    3.4.2 Content and Coherence
  3.5 Validation Process of the Present Study
    3.5.1 The Blend of Validity Evidence of the Present Study
    3.5.2 Validation Procedures in Action
  3.6 Restatement of Research Questions
  3.7 Summary
Chapter 4 Methodology
  4.1 Introduction
  4.2 Participants
    4.2.1 Student Participants
    4.2.2 Instructors and Experts
    4.2.3 Raters
  4.3 Research Design
  4.4 Instruments and Materials
    4.4.1 Instructors' Attitude Questionnaire
    4.4.2 Experts' Questionnaires
    4.4.3 The Writing Task
    4.4.4 The Coding Scheme for Students' Writings
    4.4.5 The Rating Rubrics
    4.4.6 Pilot Studies of Test Administration, Rating and Textual Coding
    4.4.7 Students' Writing Process Questionnaire
    4.4.8 The Interview
    4.4.9 Instructors' Test-Preparation Questionnaire
  4.5 Data Collection Procedures
    4.5.1 Data Collection Procedures of the Instructors' Questionnaires
    4.5.2 Data Collection Procedures of the Experts' Questionnaires
    4.5.3 Collection of Student Data
  4.6 Data Preparations
    4.6.1 Numeric Data Generation
    4.6.2 Data Entry and Missing Data Handling
  4.7 Data Analyses
  4.8 Summary
Chapter 5 Results
  5.1 Introduction
  5.2 Preliminary Data Analyses
    5.2.1 Reliability Concerns of the Instruments
    5.2.2 Validity of Questionnaires
    5.2.3 Tests of Normality of the Rating Scores
  5.3 Results for Research Question 1
    5.3.1 Findings of Experts' Questionnaire Data
    5.3.2 Findings of Instructors' Questionnaire Data
    5.3.3 Findings of Students' Questionnaire Data
    5.3.4 Sub-Conclusion
  5.4 Results for Research Question 2
    5.4.1 Results for Research Question 2: the Writing Process Perspective
    5.4.2 Results for Research Question 2: the Writing Product Perspective
    5.4.3 Sub-Conclusion
  5.5 Results for Research Question 3
    5.5.1 Results for Research Question 3: Attitude and Acceptability Aspects
    5.5.2 Results for Research Question 3: Reliability and Rasch Analysis Aspects
    5.5.3 Results for Research Question 3: Test Score Aspect
    5.5.4 Results for Research Question 3: Textual Coding Data Aspect
    5.5.5 Sub-Conclusion
  5.6 Summary
Chapter 6 Discussions
  6.1 Introduction
  6.2 Evidence for the Substantive Aspect of Construct Validity
    6.2.1 Match of the Construct: Explanation of Reliable Variance
    6.2.2 An Anatomy of the Construct in Relation to the Task: Further Explanation of Reliable Variance
  6.3 Evidence for the Generalizability Aspect of Construct Validity
    6.3.1 Generalizability: Suitability
    6.3.2 Generalizability: More Evidence
  6.4 General Discussion
  6.5 Summary
Chapter 7 Conclusions
  7.1 Introduction
  7.2 Major Findings of the Present Study
  7.3 Implications of the Present Study
    7.3.1 Theoretical Implications
    7.3.2 Assessment and Pedagogical Implications
  7.4 Limitations of the Present Study
  7.5 Directions for Further Studies
  7.6 Summary
References
Appendix
  Appendix A The Text-Based Writing Task
  Appendix B The Cited Phrases for Textual Coding
  Appendix C Cohesive Ties for Textual Coding
  Appendix D Experts' Questionnaire One
  Appendix E Instructors' Attitude Questionnaire
  Appendix F Experts' Questionnaire Two
  Appendix G The Rating Rubrics
  Appendix H Instructors' Test-Preparation Questionnaire
  Appendix I The Rating Plan
  Appendix J Students' Writing Process Questionnaire
  Appendix K Students' Interview Outline
  Appendix L Examinee Measurement Report of Rasch Analysis
  Appendix M Communalities of Factor Analysis
  Appendix N Excerpts of the Interview
  Appendix O Graphic Examples of Textual Coding

Chapter Excerpt

  The major findings of the Rasch analyses are as follows. The analysis of the examinee facet demonstrated that the text-based writing test task satisfactorily defined the students' English writing ability with discriminating power, so inferences based on the test scores can be readily supported, which has important implications for our understanding of the fairness of the assessment process (McNamara, 1996: 138). Moreover, the test task was comparatively difficult for the subject candidates, which can be interpreted from two perspectives. First, the students may not be familiar with the text-based writing task, as it is comparatively new to them. Task familiarity has been reported to influence students' writing performance; inferior performance may be attributed partly to candidates' unfamiliarity with the test task (Weigle, 2004). Second, the finding that the task is relatively tough for the candidates in the present study reflects their low English proficiency and English writing ability. That said, as the average test score reached 55 points out of a total of 100 and the test task could discriminate among candidates, the task functioned well as a measure of the writing ability concerned. In conclusion, the findings for the examinee facet imply that the text-based writing task can appropriately sample candidates' text-based writing ability. This in turn serves the primary and secondary purposes of a language test: to make inferences about the traits measured and to make decisions concerning the test-takers based on those inferences.
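  The excerpt refers to the examinee facet of a Rasch analysis but does not reproduce the model itself. For reference, a many-facet Rasch model of the kind commonly used for rated writing performance (McNamara, 1996) takes the form sketched below; the rater facet and the rating-scale step structure shown here are assumptions about a typical setup, not details given in the book:

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

where \(P_{nijk}\) is the probability of examinee \(n\) receiving score \(k\) rather than \(k-1\) on task \(i\) from rater \(j\), \(B_n\) is the ability of examinee \(n\), \(D_i\) the difficulty of task \(i\), \(C_j\) the severity of rater \(j\), and \(F_k\) the difficulty of scale step \(k\). Under such a model, the examinee measures \(B_n\) (reported in logits) are what support the claims about discrimination and relative task difficulty quoted above.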


Editor's Recommendation

  《英语读写结合写作试题效度验证:以广东省英语高考考题为例》 (Validating the English Reading-to-Write Writing Test Task: The Case of the Guangdong NMET) draws on Messick's unitary conception of validity and collects validity evidence for the text-based writing task of the National Matriculation English Test (Guangdong Version) from two perspectives: the substantive aspect and the generalizability aspect of construct validity.




Reader Comments

"Very hard to understand and not suitable for high school students; I bought the wrong book."

"This book is not really suitable for students; please take note before buying."

