Automated Essay Grading?
The next advancement in the grading of standardized tests is already underway with a new initiative to automate the assessment of essays. The William and Flora Hewlett Foundation, an organization that offers grants for efforts in education, environmentalism and global development, plans to award $100,000 to the designers of software that can accurately and automatically grade student essays for state tests. The competition aims to streamline the scoring of standardized tests by reducing costs and minimizing the time it takes to score student essays.

The expensive and time-consuming task of grading essays by hand has led many schools across the country to exclude such assignments from standardized tests. Multiple-choice questions are simple to grade automatically, but basing a test solely on them does not provide a full assessment of a student’s skills and capabilities. Essays are an important way to gauge critical reasoning and writing proficiency, which are integral to academic development.

The first part of the competition, held in January, invited vendors of existing essay-scoring software to demonstrate the efficiency of their products. The second part, which runs through April, is open to the public, so independent software developers can compete as well. Open Education Solutions and the Common Pool designed the logistics of the competition, which is hosted on Kaggle, a platform for predictive modeling competitions that lets organizations solve problems by posting them as competitions to a network of data scientists who work to find solutions.

Supporting the Hewlett Foundation in this competition are the Partnership for Assessment of Readiness for College and Careers, the Smarter Balanced Assessment Consortium and 44 departments of education across the country. In total, the effort has received $365 million from the U.S. Department of Education.

In keeping with its support of “deeper learning,” which promotes critical reasoning and problem solving as key to the mastery of core academic subjects, the Hewlett Foundation’s competition will determine how effectively these programs score essays and whether they match manual grading. If a software program proves to be faster and more accurate, it will facilitate the inclusion of essays on more state tests across the country.

Barbara Chow, education program director at the Hewlett Foundation, said of the competition: “Better tests support better learning. Rapid and accurate automated essay scoring will encourage states to include more writing in their state assessments. And the more we can use essays to assess what students have learned, the greater the likelihood they’ll master important academic content, critical thinking and effective communication.”