Animals which need to see well at night generally have eyes with wide pupils. This optical strategy for improving photon capture may be enhanced neurally by summing the outputs of neighbouring visual channels (spatial summation) or by increasing the length of time over which a sample of photons is counted by the eye (temporal summation). These summation strategies, however, come at the cost of spatial and temporal resolution. A simple analytical model is developed to investigate whether the improved photon catch afforded by summation really improves vision in dim light, or whether the losses in resolution actually make vision worse. The model, developed for both vertebrate camera eyes and arthropod compound eyes, calculates the finest spatial detail perceivable by a given eye design at a specified light intensity and image velocity. Visual performance is calculated for the apposition compound eye of the locust, the superposition compound eye of the dung beetle and the camera eye of the nocturnal toad. The results reveal that spatial and temporal summation is extremely beneficial to vision in dim light, especially in small eyes (e.g. compound eyes), which have a restricted ability to collect photons optically. The model predicts that, using optimum spatiotemporal summation, the locust can extend its vision to light intensities more than 100,000 times dimmer than if it relied on its optics alone. The relative amounts of spatial and temporal summation predicted to be optimal in dim light depend on the image velocity. Animals which are sedentary and rely on seeing small, slow images (such as the toad) are predicted to rely more on temporal summation and less on spatial summation. The opposite strategy is predicted for animals which need to see large, fast images. The predictions of the model agree very well with the known visual behaviours of nocturnal animals.
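The sketch below is not the paper's analytical model; it is a minimal Python illustration of the tradeoff described above, assuming only the classical scaling of photon catch with pupil diameter, acceptance angle and integration time (N ∝ D²Δρ²Δt·I), shot-noise-limited reliability (SNR ∝ √N), and motion blur approximated as image velocity × integration time. All parameter values and function names are hypothetical.

```python
import math

def photon_catch(D_um, d_rho_deg, dt_s, intensity):
    """Relative photon catch: N ~ D^2 * (acceptance angle)^2 * integration time * intensity.
    Absolute constants are omitted; only relative comparisons matter here."""
    return (D_um ** 2) * (d_rho_deg ** 2) * dt_s * intensity

def effective_blur_deg(d_rho_deg, n_pooled, dt_s, image_speed_deg_s):
    """Coarsest resolvable detail after summation.
    Pooling n_pooled channels widens the effective acceptance angle by ~sqrt(n_pooled);
    motion blur adds image_speed * dt. The two blurs are combined in quadrature."""
    pooled = d_rho_deg * math.sqrt(n_pooled)
    motion_blur = image_speed_deg_s * dt_s
    return math.hypot(pooled, motion_blur)

def snr(N):
    """Photon-shot-noise-limited signal-to-noise ratio."""
    return math.sqrt(N)

# Hypothetical apposition-eye-like parameters (illustrative only).
D = 25.0          # facet diameter, micrometres
d_rho = 1.0       # single-channel acceptance angle, degrees
intensity = 1e-4  # relative light level (dim)
speed = 100.0     # image velocity, degrees per second

strategies = {
    "no summation":             (1, 0.01),   # (channels pooled, integration time in s)
    "temporal summation":       (1, 0.10),
    "spatial summation":        (9, 0.01),
    "spatiotemporal summation": (9, 0.10),
}

for name, (n_pool, dt) in strategies.items():
    # Pooling n channels multiplies the photon catch by n (acceptance angle * sqrt(n)).
    N = photon_catch(D, d_rho * math.sqrt(n_pool), dt, intensity)
    blur = effective_blur_deg(d_rho, n_pool, dt, speed)
    print(f"{name:26s}  relative SNR = {snr(N):6.3f}   coarsest detail ≈ {blur:5.2f}°")
```

Run with a high image velocity, as here, the output shows temporal summation buying photons at a severe cost in motion blur, whereas spatial summation buys a similar gain at a smaller resolution cost, which is consistent with the abstract's prediction that animals viewing fast images should favour spatial over temporal summation.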