This week’s articles focus on writing assessment. I will start by saying that this is certainly a topic I deal with in my own program; the constant chatter is about assessment. How do we measure achievement in writing?
Huot begins with exactly what the educational system has been grappling with for quite some time: researchers have struggled to develop reliable and valid means of assessing writing quality. Huot provides a review of the literature addressing this relatively new way of assessing student writing. He emphasizes that the primary means of assessing writing quality involves two different readers, often English instructors, arriving at a rating of quality. I wanted to address this issue in particular, as Huot does. As a former high school composition and literature instructor, I had to prepare students for what is known as the high school proficiency exam. These exams are assessed in the same way Huot describes. However, the issue I had, and one Huot points out through the research he reviews, is that raters are often influenced by expectation and by the reading process in general. A lot of classroom instruction revolves around preparing students for these tests, but when it comes to composition, how can an instructor prepare students when multiple factors will no doubt influence the readers? It seems, as Huot points out, that the primary focus of writing assessment will continue to be content and organization. Personally, I would be interested to see future studies on how readers arrive at their decisions. Right now, the HS proficiency exam uses the IOVC (Ideas, Organization, Voice, and Conventions) rubric, and as instructors we were to emphasize this rubric to our students. I am still not sure how I feel about it.
White’s article addresses the scoring of portfolios. He argues that portfolios should not be assessed holistically because they are not designed to be scored that way; holistic scoring, White argues, is intended for specific types of writing. Because portfolios primarily document a long-term process, the student’s overall reflection on that process should be held in high regard. I agree with White in this case because many college programs, and some high school culminating-experience programs, have shifted to portfolios as a means of assessment, particularly in English and education programs. I don’t think holistic scoring would do students’ long-term efforts justice. White asserts that clear expectations and requirements should be articulated to students from the beginning so they have a clear understanding of the assessment process, with reflection playing a large role. I am not sure how many of the programs that use portfolios actually assess them, so it would be interesting to investigate this. The Master’s program transitioned to an electronic portfolio process shortly after I graduated, so I never got the chance to participate. I have never used portfolios as a teacher, but I can imagine there would be some constraints because of time and other factors.
Royer and Gilles focus on a nontraditional approach to placing students into remedial courses, specifically the freshman composition class. They suggest that students choose which composition class to sign up for based on a self-evaluation of their reading and writing ability (directed self-placement). For the most part, this method no longer points the blaming finger at instructors and English departments when students do not achieve. Essentially, the idea is that students who self-place into remedial composition courses are the same students who would have been placed there anyway. Typically, students who have a low perception of their abilities, or who actually have lower-level abilities, feel they need assistance with writing and reading, and under self-placement they would not feel the stigma of being classified as remedial; thus, student motivation is less likely to be negatively affected. I honestly wouldn’t mind seeing this play out. I think it would be a good idea to test this method at a smaller institution that has more wiggle room to test the theory.
The essence of the NCTE report on the impact of the SAT and ACT timed writing tests was the organization’s concern about the timed essay portion of the exams. Essentially, NCTE felt that this assessment would not be a good gauge of student writing ability. The report reiterates the problem many educators have with standardized tests: teachers constantly have to teach to the test. I think the same can be said for the proficiency exam I addressed earlier. Until we have another form of assessment for writing ability, or further research, it seems this will continue to be the case.
In the last article, An Apologia for the Timed Impromptu Essay Test, White argues for the benefits of the timed essay test. He contends that these tests have some credibility because they elicit actual writing from students rather than answers to standardized multiple-choice questions. I think White makes a good point: all too often with standardized tests, the object of the game is multiple choice for everything, and although the timed writing portion may not be a genuine reflection of a student’s writing, at least it offers a glimpse. Plus, graders of these tests take this into account, I hope anyway. Ultimately, as with any other type of assessment, we have to consider the context of the timed essay and what these tests aim to assess.
1 comment:
Good comments and I am glad you are interested in assessment. It is a burgeoning area, and one who is knowledgeable in it can increase his or her marketability, i.e., think "dissertation topic" ding ding...(There is also a journal devoted to it in the field called "Assessing Writing," by the way.) The problem with holistic grading, even at the high school level, as the NCTE report states, is that it tends to "reify" (or solidify) the idea of the 5-paragraph essay as the only kind of writing that exists. Especially given the hard history of current-traditional writing instruction, this is like the last thing we should be doing, particularly at the high school level. We should be doing more of the Downs and Wardle article approach: teaching students about the history of rhetoric, the variability of writing/discourse, etc. The problem is that this open-ended approach gets infinitely more complicated to assess (and hence portfolios as the best but most resource-intensive approach).