User modeling is widely used in HCI, but there are very few systematic HCI modeling tools for people with disabilities. We are developing user models to support the design and evaluation of interfaces for people with a wide range of abilities. We present a perception model that works for some kinds of visually impaired users as well as for able-bodied people. The model takes as input a list of mouse events, a sequence of bitmap images of an interface, and the locations of different objects in the interface, and produces a sequence of eye movements as output. Our model can predict the visual search time for two different visual search tasks with significant accuracy for both able-bodied and visually impaired people.
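The model's input/output interface described above might be sketched as follows. This is a minimal illustration only: the type names, fields, and the stub prediction logic are assumptions for exposition and are not taken from the paper, which describes the actual perception model elsewhere.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical types illustrating the model's inputs and output;
# names and fields are assumptions, not from the paper.

@dataclass
class MouseEvent:
    t_ms: int   # timestamp in milliseconds
    x: int      # pointer position on screen
    y: int

@dataclass
class ScreenObject:
    label: str                          # e.g. a button or icon name
    bounds: Tuple[int, int, int, int]   # (x, y, width, height)

@dataclass
class Fixation:
    x: int
    y: int
    duration_ms: int

def predict_eye_movements(mouse_events: List[MouseEvent],
                          bitmaps: List[bytes],
                          objects: List[ScreenObject]) -> List[Fixation]:
    """Placeholder: the real perception model simulates visual search;
    this stub only shows the shape of the data flowing in and out,
    emitting one fixation at each mouse-event location."""
    return [Fixation(e.x, e.y, duration_ms=200) for e in mouse_events]

events = [MouseEvent(0, 10, 20), MouseEvent(350, 120, 80)]
scan_path = predict_eye_movements(events, bitmaps=[], objects=[])
print(len(scan_path))  # one predicted fixation per mouse event in this stub
```

The key point is the data flow: interface snapshots, interaction events, and object locations go in; a predicted scan path of fixations comes out, from which visual search time can be derived.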