Current neurobiological accounts of language and cognition offer diverging views on the questions of ‘where’ and ‘how’ semantic information is stored and processed in the human brain. Neuroimaging data showing consistent activation of different multi‐modal areas during word and sentence comprehension suggest that all meanings are processed without distinction, by a set of general semantic centres or ‘hubs’. However, words belonging to specific semantic categories selectively activate modality‐preferential areas; for example, action‐related words spark activity in dorsal motor cortex, whereas object‐related ones activate ventral visual areas. The evidence for category‐specific and category‐general semantic areas calls for a unifying explanation that can account for the emergence of both. Here, a neurobiological model offering such an explanation is described. Using a neural architecture replicating anatomical and neurophysiological features of frontal, occipital and temporal cortices, basic aspects of word learning and semantic grounding in action and perception were simulated. As the network underwent training, distributed lexico‐semantic circuits spontaneously emerged. These circuits exhibited different cortical distributions, reaching into dorsal‐motor or ventral‐visual areas and thereby reflecting the correlated, category‐specific sensorimotor patterns that co‐occurred during action‐ or object‐related semantic grounding, respectively. Crucially, substantial numbers of neurons of both types of distributed circuits emerged in areas interfacing between the modality‐preferential regions, i.e. in multimodal connection hubs, which therefore became loci of general semantic binding. By relating neuroanatomical structure and cellular‐level learning mechanisms to system‐level cognitive function, this model offers a neurobiological account of both category‐general and category‐specific semantic areas, grounded in the different cortical distributions of the underlying semantic circuits.
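The learning principle summarised above — correlation‐based (Hebbian) strengthening of links between co‐active neurons, with a hub area receiving convergent input from modality‐preferential regions — can be illustrated with a minimal, hypothetical sketch. The area sizes, pattern sparseness and learning rate below are arbitrary assumptions for illustration, not the model's actual architecture or parameters:

```python
import random
from collections import defaultdict

random.seed(1)

N = 20  # units per simulated cortical area (arbitrary assumption)

AREA = {                      # concatenated unit indices per area
    "motor": range(0, N),
    "visual": range(N, 2 * N),
    "hub": range(2 * N, 3 * N),
}

def sparse(units, k=4):
    """A sparse random activity pattern: k active units within an area."""
    return set(random.sample(list(units), k))

# Grounding patterns: action words co-activate motor + hub units,
# object words co-activate visual + hub units (hub = convergence zone).
action_words = [sparse(AREA["motor"]) | sparse(AREA["hub"]) for _ in range(5)]
object_words = [sparse(AREA["visual"]) | sparse(AREA["hub"]) for _ in range(5)]

w = defaultdict(float)        # Hebbian weights between pairs of units

def hebb(pattern, lr=0.1):
    """Strengthen links between all pairs of co-active units."""
    for i in pattern:
        for j in pattern:
            if i != j:
                w[(i, j)] += lr

for p in action_words + object_words:
    hebb(p)

def hub_links(src):
    """Hub units that learned incoming connections from area `src`."""
    return {j for (i, j), v in w.items()
            if v > 0 and i in AREA[src] and j in AREA["hub"]}

# Hub hosts circuit neurons of both categories -> general semantic binding.
print(bool(hub_links("motor")), bool(hub_links("visual")))   # True True

# No word co-activated motor and visual areas directly, so no such links
# formed: category specificity stays in the modality-preferential areas.
motor_to_visual = {k for k, v in w.items()
                   if k[0] in AREA["motor"] and k[1] in AREA["visual"]}
print(len(motor_to_visual))  # 0
```

The point of the sketch is that the hub becomes category‐general purely as a side effect of connectivity: it is the only area whose learning history overlaps both categories, so it ends up hosting circuit neurons of action‐ and object‐related word circuits alike.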