Purpose – The purpose of the study is to review readability formulae and offer a critique, based on a comparison of the grades assigned to a variety of texts by six well-known formulae.

Methodology – A total of 64 texts in English were selected either by or for native English speaking children aged between six and 11 years. Each text was assessed using six commonly used readability formulae. The Words Count website (http://www.wordscount.info/) provided automated readability indices for FOG, Spache, SMOG, Flesch-Kincaid and Dale-Chall; for the ATOS formula, the Renaissance Learning website was used (http://www.renlearn.com/ar/overview/atos/). Statistical tests were then carried out to check the consistency among the six formulae in terms of their predictions of levels of text difficulty.

Findings – The analysis demonstrated significantly different readability indices for the same text depending on the formula used. Some of the formulae (but not all) were consistent in their ranking of texts in order of difficulty but were not consistent in the grade they assigned to each text. This finding suggests that readability formulae should be used carefully to support teachers' judgements about text difficulty rather than as the sole mechanism for text assessment.

Significance – Making decisions about matching texts to learners is regularly required of teachers at all levels. Making such decisions about text suitability is described as measuring the 'readability' of texts, and for a long time this measurement has been treated as unproblematic, achieved using formulae based on features such as vocabulary difficulty and sentence length. This study suggests that the use of such readability formulae is more problematic than may at first appear. Although the study was carried out with native English speaking children using texts in English, it is argued that the lessons learnt apply equally to Malay speakers reading Malay language texts.
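To illustrate the kind of surface-feature calculation the abstract refers to, the sketch below computes the published Flesch-Kincaid Grade Level formula (0.39 × words-per-sentence + 11.8 × syllables-per-word − 15.59). The tokenisation and syllable counting are deliberately naive assumptions for illustration; production tools such as those on the websites cited above use more sophisticated parsing, so their results will differ.

```python
import re


def count_syllables(word):
    # Naive heuristic: count runs of vowel letters as syllables.
    # Real readability tools use dictionaries or better rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level (published coefficients):
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)


# A very simple sentence scores below grade 0; longer, polysyllabic
# prose scores higher, reflecting the sentence-length and word-length
# features the formula relies on.
simple = flesch_kincaid_grade("The cat sat on the mat.")
complex_ = flesch_kincaid_grade(
    "Considerable methodological sophistication characterises "
    "contemporary investigations of textual accessibility.")
```

Because each formula weights these surface features differently (and some, like Dale-Chall and Spache, substitute word-frequency lists for syllable counts), two formulae can rank the same texts similarly yet assign them quite different grade levels, which is the inconsistency the study reports.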