To preserve multiple streams of independent information that converge onto a neuron, the information must be re-represented more efficiently in the neural response. Here we analyze the increase in the representational capacity of spike timing over rate codes using sound localization cues as an example.
The inferior colliculus receives convergent input from multiple auditory brainstem nuclei, carrying sound-localization information such as interaural level differences (ILDs), interaural timing differences (ITDs), and spectral cues. Virtual space techniques were used to create stimulus sets, each of which varied two sound-localization cues simultaneously. Information about the cues was quantified with a spike distance metric that separates the contributions of spike rate and spike timing to the information.
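The abstract does not specify which spike distance metric was used; a standard choice for separating rate and timing contributions is the cost-based Victor-Purpura metric, in which a timing-cost parameter q (with 1/q acting as the effective temporal resolution) interpolates between a pure spike-count comparison (q = 0) and a strict coincidence code (large q). A minimal sketch under that assumption:

```python
import numpy as np

def vp_distance(t1, t2, q):
    """Victor-Purpura spike-train distance (a plausible choice of
    metric; the paper's exact metric is not named in the abstract).

    Minimum cost of transforming spike train t1 into t2, where
    inserting or deleting a spike costs 1 and shifting a spike by
    dt seconds costs q * |dt|. With q = 0 the distance reduces to
    the difference in spike counts (a rate code); increasing q
    penalizes timing mismatches at ever finer timescales.
    """
    n, m = len(t1), len(t2)
    # Dynamic programming over spike-train prefixes, analogous to
    # string edit distance.
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)  # delete every spike of t1
    G[0, :] = np.arange(m + 1)  # insert every spike of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = min(
                G[i - 1, j] + 1,  # delete spike t1[i-1]
                G[i, j - 1] + 1,  # insert spike t2[j-1]
                G[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]),  # shift
            )
    return float(G[n, m])
```

Sweeping q and asking at which value the stimulus-related information peaks is one way to read off the timescale at which spike timing carries information, such as the ~12 ms figure reported below.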
Spike timing enhances the representation of spectral and ILD cues at timescales averaging 12 ms. ITD information, however, is carried by a rate code. Comparing responses to frozen and random noise shows that the temporal information is mainly attributable to phase locking to temporal stimulus features, with an additional first-spike latency component. With rate-based codes, there is significant confounding of information about two cues presented simultaneously, meaning that the cues cannot be decoded independently. Spike-timing-based codes reduce this confounded information. Furthermore, the relative representation of the cues often changes as a function of the time resolution of the code, implying that information about multiple cues can be multiplexed onto individual spike trains.
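In spike-metric analyses of this kind, information about a cue is commonly estimated by decoding each response (e.g. by nearest-neighbor classification under the spike distance) and computing the mutual information between presented and decoded stimuli from the resulting confusion matrix; confounding then shows up as information about one cue that is lost or distorted when the second cue also varies. The abstract does not give the paper's exact estimator, so the following is a hedged sketch of the standard confusion-matrix step:

```python
import numpy as np

def mutual_information(confusion):
    """Mutual information (bits) between presented and decoded
    stimulus, estimated from a confusion matrix of trial counts
    (rows = presented stimulus, columns = decoded stimulus).
    This plug-in estimator is a common choice; the paper's exact
    estimator is an assumption here.
    """
    p = confusion / confusion.sum()          # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginal: presented
    py = p.sum(axis=0, keepdims=True)        # marginal: decoded
    nz = p > 0                               # avoid log2(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())
```

With perfect decoding of four equiprobable stimuli the estimate is 2 bits; chance-level decoding gives 0 bits. Repeating the calculation while varying one cue or two lets one compare the joint representation against the single-cue representations and so measure the confounded component.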