Using human judgments to examine the validity of automated grammar, syntax, and mechanical errors in writing
DOI: https://doi.org/10.17239/jowr-2019.11.02.01

Keywords: assessment, automatic writing evaluation, grammar, mechanics, natural language processing, writing quality

Abstract
This study introduces GAMET, a tool developed to help writing researchers examine the types and percentages of structural and mechanical errors in texts. GAMET is a desktop application that expands LanguageTool v3.2 through a user-friendly graphical user interface that affords the automatic assessment of writing samples for structural and mechanical errors. GAMET is freely available, works on a variety of operating systems, supports batch processing of documents, and groups errors into a number of structural and mechanical error categories. This study also tests LanguageTool’s validity using hand-coded assessments of accuracy and meaningfulness on first language (L1) and second language (L2) writing corpora. The study further examines how well LanguageTool replicates human coding of structural and mechanical errors in an L1 corpus and assesses associations between GAMET output and human ratings of essay quality. Results indicate that LanguageTool can be used to successfully locate errors within texts. However, while the accuracy of LanguageTool is high, its recall of errors is low, especially for punctuation errors. Nevertheless, the errors coded by LanguageTool show significant correlations with human ratings of writing quality and with human-coded grammar and mechanics errors. Overall, the results indicate that while LanguageTool fails to flag a number of errors, the errors it does flag provide an accurate profile of the structural and mechanical errors made by writers.
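The abstract does not include GAMET's source, but the workflow it describes can be approximated programmatically. The sketch below uses the third-party language_tool_python wrapper (an assumption; GAMET itself builds on the LanguageTool Java library) to flag errors in a text, group them by LanguageTool's rule categories, and score the flags against hand-coded error spans in the precision/recall terms the study reports. The `hand_coded` spans are hypothetical placeholders for human annotations, and matching spans by exact offset and length is a simplification of how agreement would actually be adjudicated.

```python
# A minimal sketch (not GAMET itself): check a text with LanguageTool via the
# third-party language_tool_python wrapper, group flagged errors by category,
# and compute precision/recall against hypothetical hand-coded annotations.
# pip install language-tool-python
from collections import Counter

import language_tool_python

tool = language_tool_python.LanguageTool('en-US')

text = "Their going to recieve the results tomorrow , which they has waited for."
matches = tool.check(text)

# Group flags into LanguageTool's rule categories (e.g., GRAMMAR, TYPOS,
# PUNCTUATION), roughly analogous to GAMET's structural/mechanical categories.
by_category = Counter(m.category for m in matches)
print(by_category)

# Hypothetical hand-coded error spans as (offset, length) pairs, standing in
# for the human annotations the study used to validate LanguageTool.
hand_coded = {(0, 5), (15, 7), (44, 1), (52, 8)}
flagged = {(m.offset, m.errorLength) for m in matches}

true_positives = len(flagged & hand_coded)
precision = true_positives / len(flagged) if flagged else 0.0   # how accurate the flags are
recall = true_positives / len(hand_coded) if hand_coded else 0.0  # how many errors are caught
print(f"precision={precision:.2f}, recall={recall:.2f}")

tool.close()
```

Under this framing, the study's central finding reads as high precision with low recall: most flags raised by LanguageTool correspond to real errors, but many hand-coded errors (punctuation errors in particular) are never flagged.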
License
Copyright (c) 2019 Scott A. Crossley, Franklin Bradfield, Analynn Bustamante
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 Unported License.