This paper describes a text task in which children wrote their own stories in their regular school books, copied them onto digital paper using digital pens, had their handwriting recognized by computer software, and then reviewed the recognized text and highlighted errors. Children varied considerably in their ability to spot errors. Some marked text as wrong when it was in fact correct. Most spotted almost all of the errors, but they were less likely to notice an error when the recognizer presented an incorrect but plausible word than when it presented nonsense. In 13 instances, the interface corrected one of a child's own errors, but this went unnoticed. The study highlighted several difficulties with classifying, counting, and reporting errors in handwriting-recognition-based interfaces. Two new metrics for classifying and counting errors in this context are therefore proposed in this paper.