Automated Writing Evaluation System: Tapping its Potential for Learner Engagement

Most cited references (7)

The feedback triangle and the enhancement of dialogic feedback processes

Student engagement with teacher and automated feedback on L2 writing

Complementing human judgment of essays written by English language learners with e-rater® scoring

E-rater® is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to statistically model human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks in the TOEFL iBT® writing section. In this article we describe the TOEFL iBT writing section and an e-rater model proposed to provide one of two ratings for the Independent writing task. We discuss how the evidence for a process that uses both human and e-rater scoring bears on four components of a validity argument: (a) Evaluation: observations of performance on the writing task are scored to provide evidence of targeted writing skills; (b) Generalization: scores on the writing task provide estimates of expected scores over relevant parallel versions of the task and across raters; (c) Extrapolation: expected scores on the writing task are consistent with other measures of writing ability; and (d) Utilization: scores on the writing task are useful in educational contexts. Finally, we propose directions for future research that will strengthen the case for using complementary methods of scoring to improve the assessment of EFL writing.
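
As a rough, hypothetical sketch of the approach this abstract describes (extracting features from essays, statistically modeling human holistic ratings, then pairing the automated rating with a human one), the Python fragment below is illustrative only: the feature set, training data, and simple averaging rule are assumptions, not ETS's actual e-rater implementation.

# Toy "e-rater-style" scorer: extract a few surface features from essays and
# fit a linear model to human holistic ratings. Illustrative sketch only;
# the real e-rater uses much richer NLP-derived features.
import re
import numpy as np

def extract_features(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words) or 1
    n_sents = len(sentences) or 1
    return np.array([
        n_words,                                    # essay length
        n_words / n_sents,                          # average sentence length
        len({w.lower() for w in words}) / n_words,  # lexical diversity
        sum(len(w) for w in words) / n_words,       # average word length
    ])

# Hypothetical training data: essays paired with human holistic ratings.
essays = ["Short essay text ...", "A longer and more fully developed essay text ..."]
human_ratings = np.array([2.0, 4.0])

X = np.vstack([extract_features(e) for e in essays])
X = np.hstack([np.ones((len(X), 1)), X])            # intercept column
weights, *_ = np.linalg.lstsq(X, human_ratings, rcond=None)

def machine_score(essay):
    return float(np.hstack([1.0, extract_features(essay)]) @ weights)

# The TOEFL use case described above pairs a human rating with the automated
# rating; one simple combination rule is to average the two.
def combined_score(essay, human_rating):
    return (human_rating + machine_score(essay)) / 2.0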

Author and article information

Journal: IEEE Engineering Management Review (IEEE Eng. Manag. Rev.)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 0360-8581; 1937-4178
Publication date: September 1, 2018
Volume: 46
Issue: 3
Pages: 29-33
DOI: 10.1109/EMR.2018.2866150
                © 2018
