
      Planning chemical syntheses with deep neural networks and symbolic AI

      Nature
      Springer Nature


          Abstract

          To plan the syntheses of small organic molecules, chemists use retrosynthesis, a problem-solving technique in which target molecules are recursively transformed into increasingly simpler precursors. Computer-aided retrosynthesis would be a valuable tool but at present it is slow and provides results of unsatisfactory quality. Here we use Monte Carlo tree search and symbolic artificial intelligence (AI) to discover retrosynthetic routes. We combined Monte Carlo tree search with an expansion policy network that guides the search, and a filter network to pre-select the most promising retrosynthetic steps. These deep neural networks were trained on essentially all reactions ever published in organic chemistry. Our system solves for almost twice as many molecules, thirty times faster than the traditional computer-aided search method, which is based on extracted rules and hand-designed heuristics. In a double-blind AB test, chemists on average considered our computer-generated routes to be equivalent to reported literature routes.
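The search procedure the abstract describes — Monte Carlo tree search guided by an expansion policy network, with a filter network screening proposed steps — can be sketched in miniature. The sketch below is illustrative only, not the paper's implementation: a "molecule" is a string, single characters stand in for purchasable building blocks, splitting a string plays the role of a retrosynthetic disconnection, and `expansion_policy` and `filter_net` are hypothetical stand-ins for the trained networks.

```python
import math
import random

def expansion_policy(mol, k=3):
    """Hypothetical policy network: propose up to k ranked disconnections."""
    cuts = list(range(1, len(mol)))
    random.shuffle(cuts)
    return [(mol[:c], mol[c:]) for c in cuts[:k]]

def filter_net(mol, precursors):
    """Hypothetical filter network: accept or reject a proposed step."""
    return all(precursors)  # toy rule: accept any non-empty precursors

class Node:
    def __init__(self, open_mols, parent=None):
        self.open = open_mols      # molecules still to be made
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def solved(self):
        # Solved when every open molecule is a building block (length 1).
        return all(len(m) == 1 for m in self.open)

def ucb(child, parent, c=1.4):
    """Standard UCB1 score; unvisited children are explored first."""
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(target, iters=200):
    root = Node([target])
    for _ in range(iters):
        node = root
        # Selection: descend by UCB until a leaf.
        while node.children:
            node = max(node.children, key=lambda ch: ucb(ch, node))
        # Expansion: disconnect the first unsolved molecule.
        if not node.solved():
            mol = next(m for m in node.open if len(m) > 1)
            rest = [m for m in node.open if m is not mol]
            for precursors in expansion_policy(mol):
                if filter_net(mol, precursors):
                    node.children.append(Node(rest + list(precursors), node))
            if node.children:
                node = random.choice(node.children)
        # Backpropagation: reward fully solved positions.
        reward = 1.0 if node.solved() else 0.0
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return root

random.seed(0)
root = mcts("abcdef")
print(root.visits)  # 200
```

Each iteration backpropagates to the root, so the root's visit count equals the iteration budget; the value statistics then bias later selections toward branches that have reached fully solved positions.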

          Most cited references (44)


          Mastering the game of Go with deep neural networks and tree search.

          The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
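The abstract above describes combining Monte Carlo simulation with value and policy networks. A common way such systems blend the two during selection is a PUCT-style score: the running mean value plus a policy-prior-weighted exploration bonus that decays as a move is visited more. The sketch below is a generic illustration of that rule, not AlphaGo's actual code; all numbers and the constant `c_puct` are illustrative.

```python
import math

def puct_score(child, parent_visits, c_puct=1.0):
    """Mean simulation value plus a prior-weighted exploration bonus."""
    q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
    u = c_puct * child["prior"] * math.sqrt(parent_visits) / (1 + child["visits"])
    return q + u

# Three candidate moves: priors from a policy network, running value
# statistics from simulations (all numbers illustrative).
children = [
    {"prior": 0.6, "visits": 10, "value_sum": 4.0},
    {"prior": 0.3, "visits": 2,  "value_sum": 1.6},
    {"prior": 0.1, "visits": 0,  "value_sum": 0.0},
]
parent_visits = sum(ch["visits"] for ch in children)

best = max(children, key=lambda ch: puct_score(ch, parent_visits))
print(best["prior"])  # 0.3
```

Note the trade-off the rule encodes: the second move wins despite a lower prior because its observed mean value (0.8) dominates once a few simulations have run, while the unvisited third move still receives an undecayed exploration bonus.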

            The ORCA program system


              A Survey of Monte Carlo Tree Search Methods


                Author and article information

                Journal: Nature (Springer Nature)
                ISSN: 0028-0836 (print); 1476-4687 (online)
                Published: March 28 2018
                Volume 555, Issue 7698, pages 604-610
                DOI: 10.1038/nature25978
                PMID: 29595767
                ScienceOpen record: 79dec48c-59f4-4138-ae28-df90150937fc
                © 2018