      A talkative Potts attractor neural network welcomes BLISS words

      Pirmoradian 1 , Treves 1
      BMC Neuroscience
      BioMed Central
      Twenty First Annual Computational Neuroscience Meeting: CNS*2012
      21-26 July 2012


          Abstract

          Neuroscientists have observed that the human brain comprises neurons. We have also observed that human babies start speaking at an early age, whereas no young animals, including pets, have so far been seen to speak, at least not in the articulated fashion of human babies. To understand this highly cognitive ability, many psycholinguistic data have been gathered, from behavioral to neurolinguistic to recent neuroimaging studies, each measuring macroscopic properties of the brain. Nevertheless, the challenging question of how such complicated behavior emerges from the microscopic (or mesoscopic) properties of individual neurons and of networks of neurons in the brain remains unanswered.

          We would like to tackle this question by developing and analyzing a Potts attractor neural network model, whose units hypothetically represent patches of cortex. The network can spontaneously hop (or latch) across memory patterns, which have been stored as dynamical attractors, thus producing an infinite sequence of patterns, at least in some regimes [1]. We would like to train the network with a corpus of sentences in BLISS [2], a scaled-down synthetic language of intermediate complexity, with about 150 words and about 40 rewrite rules. We expect the Potts network to generate sequences of memorized words whose statistics reflect, to some degree, those of the BLISS corpus used in training.

          Before training the network on the corpus, the critical issues to be addressed, and the central ones here, are: How should words be represented in a cognitively plausible manner in the network? How should the correlation between words, in terms of both meaning and statistical dependences, be reflected in their (neural) representations? How should the two main characteristics of a word, its meaning (semantics) and its syntactic properties, be represented in the network?

          We represent words in a distributed fashion over 900 units, of which 541 express the semantic content of a word and the remaining 359 its syntactic characteristics. The distinction between semantic and syntactic characteristics has been loosely inspired by a vast number of neuropsychological studies [3]. Further, several findings have indicated a distinction between the encoding of function words (i.e., prepositions, conjunctions, determiners, etc.) and content words (i.e., nouns, verbs, adjectives, ...) in the brain [4]. To implement a plausible model of the variable degree of correlation between word representations, we have used an algorithm consisting of two steps [5]: first, a number of vectors, called factors, are established, each factor influencing the activation of some of the units by "suggesting" a particular state; second, the competition among these factors determines the activation state of each unit in the representation of a word.

          A preliminary analysis of the produced patterns indicates that the statistics of the word representations resemble those of the patterns that can generate latching behavior in the network. This is a promising step towards building a neural network that can spontaneously generate sequences of words (sentences) with the desired syntactic and semantic relationships between the words in a sentence.
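          To make the two-step procedure above concrete, the following Python fragment is a minimal sketch of factor-based pattern generation: the 541/359 split of semantic and syntactic units is taken from the abstract, while the number of Potts states, the number and size of the factors, the sparsity, and the competition rule are placeholder assumptions of ours, not the parameterization used in [5].

          import numpy as np

          # Sketch of the two-step, factor-based generation of correlated word patterns.
          # Only the 541/359 split is from the abstract; every other value is an assumption.
          N_SEM, N_SYN = 541, 359        # semantic and syntactic units (from the abstract)
          N = N_SEM + N_SYN              # 900 Potts units per word representation
          S = 7                          # active Potts states per unit (assumed)
          N_FACTORS = 200                # number of factors (assumed)
          UNITS_PER_FACTOR = 60          # units each factor "speaks" to (assumed)
          SPARSITY = 0.25                # fraction of units active per pattern (assumed)

          rng = np.random.default_rng(1)

          # Step 1: establish the factors. Each factor selects a subset of units and
          # suggests one particular Potts state to each of them, with a random strength.
          factors = []
          for _ in range(N_FACTORS):
              units = rng.choice(N, size=UNITS_PER_FACTOR, replace=False)
              states = rng.integers(0, S, size=UNITS_PER_FACTOR)
              factors.append((units, states, rng.exponential(1.0)))

          def make_word_pattern(factor_ids):
              """Step 2: competition among the chosen factors sets the state of each unit."""
              votes = np.zeros((N, S))
              for f in factor_ids:
                  units, states, strength = factors[f]
                  votes[units, states] += strength * rng.random(len(units))
              support, winners = votes.max(axis=1), votes.argmax(axis=1)
              pattern = np.full(N, -1)                       # -1 marks a quiescent unit
              threshold = np.quantile(support, 1.0 - SPARSITY)
              active = (support >= threshold) & (support > 0)
              pattern[active] = winners[active]              # winning suggested state
              return pattern

          # Words sharing factors end up with correlated representations.
          shared = rng.choice(N_FACTORS, size=10, replace=False)
          word_a = make_word_pattern(np.concatenate([shared, rng.choice(N_FACTORS, 10)]))
          word_b = make_word_pattern(np.concatenate([shared, rng.choice(N_FACTORS, 10)]))
          print("co-active overlap:", np.mean((word_a == word_b) & (word_a >= 0)))

          The printed overlap is only a sanity check that shared factors induce correlated patterns; in the actual model the factor assignments, rather than random draws, would carry the semantic and syntactic relations between words.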

          Most cited references (2)


          BLISS: an artificial language for learnability studies

          To explore the neurocognitive mechanisms underlying the human language faculty, cognitive scientists use artificial languages to control the language learning environment more precisely and to study selected aspects of natural languages. The aim of the present study is the construction of an artificial Basic Language Incorporating Syntax and Semantics, BLISS, which mimics natural languages by possessing a vocabulary, syntax, and semantics. BLISS is generated by a context-free grammar of limited complexity, with about 40 production rules whose probabilities were drawn from the Wall Street Journal corpus. The BLISS vocabulary contains about 150 words, selected from the Shakespeare corpus, which belong to different lexical categories such as noun, verb, and adjective. Semantics was defined as the dependence of each word on the preceding words in the same sentence, determined purely by imposing constraints on word choice during sentence generation. Based on the different algorithms applied to the selection of a new word, four alternative language models, three with semantics and one without, were constructed: Exponential, Subject-Verb, Verb-Subject, and No-Semantics. To measure the effect of introducing semantics to BLISS, the distances between the distributions of consecutive word pairs in corpora generated by the different language models were measured using the Kullback-Leibler (KL) divergence. To isolate the effect of semantics, we first attempted to eliminate the effect of word frequency by producing corpora with closely matched word frequencies. Looking at the KL divergences between the word-pair distributions, we observed that all three semantic models are relatively far from the No-Semantics one; the Verb-Subject model shows a different kind of dependence between words, while the Subject-Verb and Exponential models represent very similar dependences. Furthermore, if the influence of the preceding words on word choice is increased, through a parameter in the semantic models, the distances of the semantic models from the No-Semantics one increase considerably, underscoring the effect of introducing semantics into the language.
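          As an illustration of the kind of measurement described above, the short Python sketch below estimates the distribution of consecutive word pairs in two corpora and computes their Kullback-Leibler divergence; the toy vocabulary, corpora, and additive smoothing are assumptions made for the example, not the paper's actual corpora or estimator.

          from collections import Counter
          from math import log

          def bigram_distribution(corpus, vocab, eps=1e-6):
              """Additively smoothed distribution over consecutive word pairs."""
              counts = Counter()
              for sentence in corpus:
                  counts.update(zip(sentence, sentence[1:]))
              pairs = [(w1, w2) for w1 in vocab for w2 in vocab]
              total = sum(counts.values()) + eps * len(pairs)
              return {p: (counts[p] + eps) / total for p in pairs}

          def kl_divergence(p, q):
              """D_KL(P || Q) over the shared support of word pairs."""
              return sum(p[x] * log(p[x] / q[x]) for x in p)

          # Toy corpora standing in for two BLISS language models (illustrative only).
          vocab = ["john", "sees", "mary", "runs"]
          corpus_semantics = [["john", "sees", "mary"], ["mary", "runs"]] * 50
          corpus_no_semantics = [["mary", "sees", "john"], ["john", "runs"]] * 50

          p = bigram_distribution(corpus_semantics, vocab)
          q = bigram_distribution(corpus_no_semantics, vocab)
          # Larger divergence = more dissimilar consecutive word-pair statistics.
          print(f"D_KL = {kl_divergence(p, q):.3f}")

          In the paper, the compared distributions come from corpora generated by the Exponential, Subject-Verb, Verb-Subject, and No-Semantics models after matching word frequencies; the sketch only shows the divergence computation itself.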

            BLISS: an Artificial Language for Learnability Studies


              Author and article information

              Conference
              BMC Neuroscience (BMC Neurosci)
              BioMed Central
              ISSN: 1471-2202
              Published: 16 July 2012
              2012, 13(Suppl 1): P21
              Affiliations
              [1] Cognitive Neuroscience Sector, SISSA, Trieste, 34136, Italy
              Article
              Article ID: 1471-2202-13-S1-P21
              DOI: 10.1186/1471-2202-13-S1-P21
              PMC: 3403524
              Copyright ©2012 Pirmoradian and Treves; licensee BioMed Central Ltd.

              This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

              Twenty First Annual Computational Neuroscience Meeting: CNS*2012
              Decatur, GA, USA
              21-26 July 2012
              Categories
              Poster Presentation

              Neurosciences
