
      Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms

      Preprint


          Abstract

          There has been significant progress in understanding the parallelism inherent to iterative sequential algorithms: for many classic algorithms, the depth of the dependence structure is now well understood, and scheduling techniques have been developed to exploit this shallow dependence structure for efficient parallel implementations. A related, applied research strand has studied methods by which certain iterative task-based algorithms can be efficiently parallelized via relaxed concurrent priority schedulers. These allow for high concurrency when inserting and removing tasks, at the cost of executing superfluous work due to the relaxed semantics of the scheduler. In this work, we take a step towards unifying these two research directions, by showing that there exists a family of relaxed priority schedulers that can efficiently and deterministically execute classic iterative algorithms such as greedy maximal independent set (MIS) and matching. Our primary result shows that, given a randomized scheduler with an expected relaxation factor of \(k\) in terms of the maximum allowed priority inversions on a task, and any graph on \(n\) vertices, the scheduler is able to execute greedy MIS with only an additive factor of poly(\(k\)) expected additional iterations compared to an exact (but not scalable) scheduler. This counter-intuitive result demonstrates that the overhead of relaxation when computing MIS is not dependent on the input size or structure of the input graph. Experimental results show that this overhead can be clearly offset by the gain in performance due to the highly scalable scheduler. In sum, we present an efficient method to deterministically parallelize iterative sequential algorithms, with provable runtime guarantees in terms of the number of executed tasks to completion.
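
          To make the scheduling model concrete, the following is a minimal Python sketch, not the authors' implementation: greedy_mis_relaxed, adj, priority, and decided are illustrative names, and the k-relaxed queue is simulated by popping a uniformly random element among the k highest-priority pending vertices. A vertex's task is executed only when it is a priority minimum among its undecided neighbours (otherwise it is re-inserted), so the computed MIS matches the sequential greedy result while the iteration counter records the extra work introduced by relaxation.

```python
import random
from bisect import insort

def greedy_mis_relaxed(adj, priority, k=4, seed=0):
    """Greedy maximal independent set driven by a simulated k-relaxed scheduler.

    adj[v] lists the neighbours of vertex v; priority[v] gives v's rank
    (lower value = earlier in the sequential greedy order).
    """
    rng = random.Random(seed)
    n = len(adj)
    in_mis = [False] * n
    decided = [False] * n                  # in the MIS, or excluded by a neighbour
    pending = sorted((priority[v], v) for v in range(n))   # sorted work queue
    iterations = 0

    while pending:
        # Relaxation: pop a uniformly random entry among the k highest-priority
        # pending vertices, modelling bounded priority inversions per removal.
        _, v = pending.pop(rng.randrange(min(k, len(pending))))
        iterations += 1
        if decided[v]:
            continue                       # superfluous work caused by relaxation
        if any(not decided[u] and priority[u] < priority[v] for u in adj[v]):
            insort(pending, (priority[v], v))   # not ready yet: retry later
            continue
        # v is the earliest undecided vertex in its neighbourhood, so the
        # sequential greedy algorithm would also select it at this point.
        in_mis[v] = decided[v] = True
        for u in adj[v]:
            decided[u] = True              # neighbours can never join the MIS
    return [v for v in range(n) if in_mis[v]], iterations

# Hypothetical toy input: path 0-1-2-3, vertex 1 has the highest priority (rank 0).
adj = [[1], [0, 2], [1, 3], [2]]
mis, iters = greedy_mis_relaxed(adj, priority=[2, 0, 3, 1], k=2)
print(mis, iters)   # [1, 3]; iters >= 4, with any surplus pops being the relaxation cost
```

          On this toy input the returned set is always the sequential greedy MIS; only the iteration count varies with k and the random choices, which is the sense in which relaxation costs extra work rather than correctness.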


                Author and article information

                Journal
                13 August 2018
                Article
                DOI: 10.1145/3212734.3212756
                arXiv: 1808.04155

                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/

                History
                Custom metadata: PODC 2018, pages 377-386 in proceedings
                cs.DS

                Data structures & Algorithms
