      Determination of the phylogenetic origins of the Árpád Dynasty based on Y chromosome sequencing of Béla the Third

      research-article


          Abstract

We set out to identify the origins of the Árpád Dynasty based on genome sequencing of DNA derived from the skeletal remains of Hungarian King Béla III (1172–1196) and eight additional individuals (six males, two females) originally interred at the Royal Basilica of Székesfehérvár. Y-chromosome analysis established that two individuals, Béla III and HU52, belong to haplogroup R-Z2125, whose distribution centres near South Central Asia, with subsidiary expansions in the regions of modern Iran, the Volga-Ural region and the Caucasus. From a cohort of 4340 individuals from these geographic areas, we acquired whole-genome data from 208 individuals derived for the R-Z2123 haplogroup. From these data we established that the closest living kin of the Árpád Dynasty are R-SUR51-derived modern-day Bashkirs, predominantly from the Burzyansky and Abzelilovsky districts of Bashkortostan in the Russian Federation. Our analysis also reveals SNPs defining a novel Árpád-Dynasty-specific haplogroup, R-ARP. Framed within the context of a high-resolution R-Z2123 phylogeny, the ancestry of the first Hungarian royal dynasty traces to a region centred near Northern Afghanistan about 4500 years ago, and identifies the Bashkirs as their closest kin, with a separation date between the two populations at the beginning of the first millennium CE.
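The haplogroup assignments above follow the usual logic of Y-chromosome phylogenetics: a sample is placed on the most terminal branch for which it carries the derived allele at every defining SNP along the path from the root. A minimal sketch of that logic follows; the SNP names come from the abstract (Z2125, Z2123, SUR51, ARP), but the branch order shown and the example genotype calls are illustrative assumptions, not the paper's data.

```python
# Minimal sketch of Y-haplogroup assignment from derived-allele calls.
# SNP names come from the abstract; the branch order shown and the
# example genotype calls are illustrative assumptions.

# Defining SNPs listed root-to-tip along one hypothetical branch path.
BRANCH = ["Z2125", "Z2123", "SUR51", "ARP"]

def assign_haplogroup(derived_calls):
    """Return the most terminal haplogroup whose entire ancestral path
    of defining SNPs is observed in the derived state."""
    assigned = None
    for snp in BRANCH:
        if derived_calls.get(snp):   # derived allele observed at this SNP
            assigned = "R-" + snp
        else:                        # ancestral or missing: stop descending
            break
    return assigned

# A sample derived for Z2125 and Z2123 but ancestral for SUR51 is R-Z2123.
print(assign_haplogroup({"Z2125": True, "Z2123": True, "SUR51": False}))
```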


Most cited references (31)


          Integrative Genomics Viewer

To the Editor

Rapid improvements in sequencing and array-based platforms are resulting in a flood of diverse genome-wide data, including data from exome and whole-genome sequencing, epigenetic surveys, expression profiling of coding and non-coding RNAs, SNP and copy-number profiling, and functional assays. Analysis of these large, diverse datasets holds the promise of a more comprehensive understanding of the genome and its relation to human disease. Experienced and knowledgeable human review is an essential component of this process, complementing computational approaches. This calls for efficient and intuitive visualization tools able to scale to very large datasets and to flexibly integrate multiple data types, including clinical data. However, the sheer volume and scope of data pose a significant challenge to the development of such tools.

To address this challenge we developed the Integrative Genomics Viewer (IGV), a lightweight visualization tool that enables intuitive real-time exploration of diverse, large-scale genomic datasets on standard desktop computers. It supports flexible integration of a wide range of genomic data types including aligned sequence reads, mutations, copy number, RNAi screens, gene expression, methylation, and genomic annotations (Figure S1). IGV makes use of efficient, multi-resolution file formats to enable real-time exploration of arbitrarily large datasets over all resolution scales, while consuming minimal resources on the client computer (see Supplementary Text). Navigation through a dataset is similar to Google Maps, allowing the user to zoom and pan seamlessly across the genome at any level of detail from whole genome to base pair (Figure S2). Datasets can be loaded from local or remote sources, including cloud-based resources, enabling investigators to view their own genomic datasets alongside publicly available data from, for example, The Cancer Genome Atlas (TCGA) [1], 1000 Genomes (www.1000genomes.org/), and ENCODE [2] (www.genome.gov/10005107) projects. In addition, IGV allows collaborators to load and share data locally or remotely over the Web.

IGV supports concurrent visualization of diverse data types across hundreds, and up to thousands, of samples, and correlation of these integrated datasets with clinical and phenotypic variables. A researcher can define arbitrary sample annotations and associate them with data tracks using a simple tab-delimited file format (see Supplementary Text). These might include, for example, sample identifier (used to link different types of data for the same patient or tissue sample), phenotype, outcome, cluster membership, or any other clinical or experimental label. Annotations are displayed as a heatmap but, more importantly, are used for grouping, sorting, filtering, and overlaying diverse data types to yield a comprehensive picture of the integrated dataset. This is illustrated in Figure 1, a view of copy number, expression, mutation, and clinical data from 202 glioblastoma samples from the TCGA project in a 3 kb region around the EGFR locus [1, 3]. The investigator first grouped samples by tumor subtype, then by data type (copy number and expression), and finally sorted them by median copy number over the EGFR locus. A shared sample identifier links the copy number and expression tracks, maintaining their relative sort order within the subtypes. Mutation data are overlaid on corresponding copy number and expression tracks, based on shared participant identifier annotations.
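The tab-delimited annotation mechanism just described is essentially a group-then-sort over sample metadata. A minimal sketch of that logic outside IGV, assuming a hypothetical annotations file with sample, subtype and egfr_copy_number columns (IGV's real annotation format has its own documented conventions):

```python
# Sketch of the group-then-sort logic IGV applies to sample annotations.
# The file name and column names are hypothetical assumptions.
import csv
from itertools import groupby

with open("sample_annotations.tsv") as fh:
    rows = list(csv.DictReader(fh, delimiter="\t"))

# Group samples by tumor subtype, then sort each group by copy number
# over the locus of interest (descending), mirroring the Figure 1 workflow.
rows.sort(key=lambda r: r["subtype"])
for subtype, group in groupby(rows, key=lambda r: r["subtype"]):
    ordered = sorted(group, key=lambda r: float(r["egfr_copy_number"]),
                     reverse=True)
    print(subtype, [r["sample"] for r in ordered])
```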
Several trends in the data stand out, such as a strong correlation between copy number and expression and an overrepresentation of EGFR-amplified samples in the Classical subtype.

IGV's scalable architecture makes it well suited for genome-wide exploration of next-generation sequencing (NGS) datasets, including both basic aligned read data and derived results, such as read coverage. NGS datasets can approach terabytes in size, so careful management of data is necessary to conserve compute resources and to prevent information overload. IGV varies the displayed level of detail according to resolution scale. At very wide views, such as the whole genome, IGV represents NGS data by a simple coverage plot. Coverage data are often useful for assessing overall quality and diagnosing technical issues in sequencing runs (Figure S3), as well as for analysis of ChIP-Seq [4] and RNA-Seq [5] experiments (Figures S4 and S5).

As the user zooms below the ~50 kb range, individual aligned reads become visible (Figure 2) and putative SNPs are highlighted as allele counts in the coverage plot. Alignment details for each read are available in popup windows (Figures S6 and S7). Zooming further, individual base mismatches become visible, highlighted by color and intensity according to base call and quality. At this level, the investigator may sort reads by base, quality, strand, sample and other attributes to assess the evidence for a variant. This type of visual inspection can be an efficient and powerful tool for variant call validation, eliminating many false positives and aiding in confirmation of true findings (Figures S6 and S7).

Many sequencing protocols produce reads from both ends ("paired ends") of genomic fragments of known size distribution. IGV uses this information to color-code paired ends if their insert sizes are larger than expected, fall on different chromosomes, or have unexpected pair orientations. Such pairs, when consistent across multiple reads, can be indicative of a genomic rearrangement. When coloring aberrant paired ends, each chromosome is assigned a unique color, so that intra- (same color) and inter- (different color) chromosomal events are readily distinguished (Figures 2 and S8). We note that misalignments, particularly in repeat regions, can also yield unexpected insert sizes, and these can be diagnosed with IGV (Figure S9).

There are a number of stand-alone desktop genome browsers available today [6], including Artemis [7], EagleView [8], MapView [9], Tablet [10], Savant [11], Apollo [12], and the Integrated Genome Browser [13]. Many of them have features that overlap with IGV, particularly for NGS sequence alignment and genome annotation viewing. The Integrated Genome Browser also supports viewing array-based data. See Supplementary Table 1 and Supplementary Text for more detail. IGV focuses on the emerging integrative nature of genomic studies, placing equal emphasis on array-based platforms, such as expression and copy-number arrays, next-generation sequencing, and clinical and other sample metadata. Indeed, an important and unique feature of IGV is the ability to view all these different data types together and to use the sample metadata to dynamically group, sort, and filter datasets (Figure 1 above). Another important characteristic of IGV is fast data loading and real-time pan and zoom, at all scales of genome resolution and all dataset sizes, including datasets comprising hundreds of samples.
Finally, we have placed great emphasis on the ease of installation and use of IGV, with the goal of making both the viewing and sharing of data accessible to non-informatics end users. IGV is open-source software and freely available at http://www.broadinstitute.org/igv/, including full documentation on use of the software.
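The paired-end color coding described above reduces to a simple per-read classification. A hedged sketch using the pysam library, where the BAM path, the region, and the 1000 bp insert threshold are illustrative assumptions (an indexed BAM is assumed for fetch):

```python
# Sketch of the aberrant-pair classification that IGV's coloring reflects.
# pysam is a real library; file name, region and threshold are assumptions.
import pysam

MAX_INSERT = 1000  # assumed upper bound on the expected fragment size

def classify_pair(read):
    """Label one aligned read of a pair the way IGV's coloring does:
    inter-chromosomal, oversized insert, or unexpected orientation."""
    if not read.is_paired or read.mate_is_unmapped:
        return "unpaired"
    if read.reference_id != read.next_reference_id:
        return "inter-chromosomal"       # mates on different chromosomes
    if abs(read.template_length) > MAX_INSERT:
        return "oversized-insert"        # possible deletion/rearrangement
    if read.is_reverse == read.mate_is_reverse:
        return "unexpected-orientation"  # both mates on the same strand
    return "normal"

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam.fetch("chr7", 55_000_000, 55_100_000):
        label = classify_pair(read)
        if label not in ("normal", "unpaired"):
            print(read.query_name, label)
```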

            Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega

Introduction

Multiple sequence alignments (MSAs) are essential in most bioinformatics analyses that involve comparing homologous sequences. The exact way of computing an optimal alignment between N sequences has a computational complexity of O(L^N) for N sequences of length L, making it prohibitive for even small numbers of sequences. Most automatic methods are based on the 'progressive alignment' heuristic (Hogeweg and Hesper, 1984), which aligns sequences in larger and larger subalignments, following the branching order in a 'guide tree'. With a complexity of roughly O(N^2), this approach can routinely make alignments of a few thousand sequences of moderate length, but it is tough to make alignments much bigger than this. The progressive approach is a 'greedy algorithm', where mistakes made at the initial alignment stages cannot be corrected later. To counteract this effect, the consistency principle was developed (Notredame et al, 2000). This has allowed the production of a new generation of more accurate aligners (e.g. T-Coffee (Notredame et al, 2000)), but at the expense of ease of computation. These methods give 5–10% more accurate alignments, as measured on benchmarks, but are confined to a few hundred sequences.

In this report, we introduce a new program called Clustal Omega, which is accurate but also allows alignments of almost any size to be produced. We have used it to generate alignments of over 190 000 sequences on a single processor in a few hours. In benchmark tests, it is distinctly more accurate than most widely used fast methods and comparable in accuracy to some of the intensive slow methods. It also has powerful features for allowing users to reuse their alignments, so as to avoid recomputing an entire alignment every time new sequences become available.

The key to making the progressive alignment approach scale is the method used to make the guide tree. Normally, this involves aligning all N sequences to each other, giving time and memory requirements of O(N^2). Protein families with >50 000 sequences are appearing and will become common from various wide-scale genome sequencing projects. Currently, the only method that can routinely make alignments of more than about 10 000 sequences is MAFFT/PartTree (Katoh and Toh, 2007). It is very fast but leads to a loss in accuracy, which has to be compensated for by iteration and other heuristics. With Clustal Omega, we use a modified version of mBed (Blackshields et al, 2010), which has complexity of O(N log N) and which produces guide trees that are just as accurate as those from conventional methods. mBed works by 'emBedding' each sequence in a space of n dimensions, where n is proportional to log N. Each sequence is then replaced by an n-element vector, where each element is simply the distance to one of n 'reference sequences'. These vectors can then be clustered extremely quickly by standard methods such as K-means or UPGMA. In Clustal Omega, the alignments are then computed using the very accurate HHalign package (Söding, 2005), which aligns two profile hidden Markov models (Eddy, 1998).

Clustal Omega has a number of features for adding sequences to existing alignments or for using existing alignments to help align new sequences. One innovation is to allow users to specify a profile HMM that is derived from an alignment of sequences that are homologous to the input set. The sequences are then aligned to these 'external profiles' to help align them to the rest of the input set.
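The mBed embedding described above reduces to a few lines. A sketch under stated assumptions: a crude mismatch-fraction distance stands in for Clustal Omega's actual k-tuple measure, and a plain Lloyd's k-means stands in for the k-means++ code the program really uses; the toy sequences are invented.

```python
# Sketch of mBed-style embedding: replace each sequence by its vector of
# distances to ~log2(N) reference sequences, then cluster the vectors.
import math
import random

def distance(a, b):
    """Toy pairwise distance: mismatch fraction over the shorter length
    (a crude stand-in for the k-tuple distance mBed actually uses)."""
    n = min(len(a), len(b))
    return sum(x != y for x, y in zip(a[:n], b[:n])) / n

def mbed_embed(seqs, seed=0):
    random.seed(seed)
    n_refs = max(1, int(math.log2(len(seqs))))
    refs = random.sample(seqs, n_refs)
    return [[distance(s, r) for r in refs] for s in seqs]

def kmeans(vectors, k, iters=20):
    """Plain Lloyd's algorithm; Clustal Omega itself uses k-means++."""
    centers = random.sample(vectors, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: sum(
                      (v - centers[c][d]) ** 2 for d, v in enumerate(vec)))
                  for vec in vectors]
        for c in range(k):
            members = [vec for vec, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels

seqs = ["MKTAYIAKQR", "MKTAYIAKQK", "MSTNPKPQRK", "MSTNPKPQRR"]
print(kmeans(mbed_embed(seqs), k=2))
```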
There are already widely available collections of HMMs from many sources such as Pfam (Finn et al, 2009), and these can now be used to help users align their sequences.

Results

Alignment accuracy

The standard method for measuring the accuracy of multiple alignment algorithms is to use benchmark test sets of reference alignments, generated with reference to three-dimensional structures. Here, we present results from a range of packages tested on three benchmarks: BAliBASE (Thompson et al, 2005), Prefab (Edgar, 2004) and an extended version of HomFam (Blackshields et al, 2010). For these tests, we just report results using the default settings for all programs, but with two exceptions, which were needed to allow MUSCLE (Edgar, 2004) and MAFFT to align the biggest test cases in HomFam. For test cases with >3000 sequences, we ran MUSCLE with the -maxiters parameter set to 2, in order to finish the alignments in reasonable times. Second, we have run several different programs from the MAFFT package. MAFFT (Katoh et al, 2002) consists of a series of programs that can be run separately or called automatically from a script with the --auto flag set. This flag chooses to run a slow, consistency-based program (L-INS-i) when the number and lengths of sequences are small. When the numbers exceed inbuilt thresholds, a conventional progressive aligner is used (FFT-NS-2). The latter is also the program that is run by default if MAFFT is called with no flags set. For very large data sets, the --parttree flag must be set on the command line, and a very fast guide tree calculation is then used.

The results for the BAliBASE benchmark tests are shown in Table I. BAliBASE is divided into six 'references'. Average scores are given for each reference, along with total run times and average total column (TC) scores, which give the proportion of the total alignment columns that is recovered. A score of 1.0 indicates perfect agreement with the benchmark. There are two rows for the MAFFT package: MAFFT (auto) and MAFFT default. In most (203 out of 218) BAliBASE test cases, the number of sequences is small and the script runs L-INS-i, the slow accurate program that uses the consistency heuristic (Notredame et al, 2000) also used by MSAprobs (Liu et al, 2010), Probalign, Probcons (Do et al, 2005) and T-Coffee. These programs are all restricted to small numbers of sequences but tend to give accurate alignments. This is clearly reflected in the times and average scores in Table I. The times range from 25 min up to 22 h for these packages, and the accuracies range from 55 to 61% of columns correct. Clustal Omega only takes 9 min for the same runs but has an accuracy level similar to that of Probcons and T-Coffee. The rest of the table is mainly taken up by the programs that use progressive alignment. Some of these are very fast, but this speed is matched by a considerable drop in accuracy compared with the consistency-based programs and Clustal Omega. The weakest program here is Clustal W (Larkin et al, 2007), followed by PRANK (Löytynoja and Goldman, 2008). PRANK is not designed for aligning distantly related sequences but for giving good alignments for phylogenetic work, with special attention to gaps. These gap positions are not included in these tests, as they tend not to be structurally conserved. Dialign (Morgenstern et al, 1998) does not use consistency or progressive alignment but is based on finding best local multiple alignments. FSA (Bradley et al, 2009) uses sampling of pairwise alignments and 'sequence annealing' and has been shown to deliver good nucleotide sequence alignments in the past.

The Prefab benchmark test results are shown in Table II. Here, the results are divided into five groups according to the percent identity of the sequences. The overall scores range from 53 to 73% of columns correct. The consistency-based programs MSAprobs, MAFFT L-INS-i, Probalign, Probcons and T-Coffee are again the most accurate, but with long run times. Clustal Omega is close to the consistency programs in accuracy but is much faster. There is then a gap to the faster progressive-based programs MUSCLE, MAFFT, Kalign (Lassmann and Sonnhammer, 2005) and Clustal W.

Results from testing large alignments with up to 50 000 sequences are given in Table III using HomFam. Here, each alignment is made up of a core Homstrad (Mizuguchi et al, 1998) structure-based alignment of at least five sequences. These sequences are then inserted into a test set of sequences from the corresponding, homologous Pfam domain. This gives very large sets of sequences to be aligned, but the testing is only carried out on the sequences with known structures. Only some programs are able to deliver alignments at all with data sets of this size. We restricted the comparisons to Clustal Omega, MAFFT, MUSCLE and Kalign. MAFFT with default settings has a limit of 20 000 sequences, and we only use MAFFT with --parttree for the last section of Table III. MUSCLE becomes increasingly slow beyond 3000 sequences; therefore, for >3000 sequences we used MUSCLE with the faster but less accurate setting of -maxiters 2, which restricts the number of iterations to two. Overall, Clustal Omega is easily the most accurate program in Table III. The run times show MAFFT default and Kalign to be exceptionally fast on the smaller test cases and MAFFT --parttree to be very fast on the biggest families. Clustal Omega does scale well, however, with increasing numbers of sequences. This scaling is described in more detail in the Supplementary Information. We do have two further test cases with >50 000 sequences, but it was not possible to get results for these from MUSCLE or Kalign. These are described in the Supplementary Information as well.

Table III gives overall run times for the four programs evaluated with HomFam. Figure 1 resolves these run times case by case. Kalign is very fast for small families but does not scale as well. Overall, MAFFT is faster than the other programs over all test case sizes, but Clustal Omega scales similarly. Points in Figure 1 represent different families with different average sequence lengths and pairwise identities. Therefore, the scalability trend is fuzzy, with larger dots occurring generally above smaller dots. Supplementary Figure S3 shows scalability data where subsets of increasing size are sampled from one large family only. This reduces variability in pairwise identity and sequence length.

External profile alignment

Clustal Omega can read extra information from a profile HMM derived from preexisting alignments. For example, if a user wishes to align a set of globin sequences and has an existing globin alignment, this alignment can be converted to a profile HMM and used as well as the sequence input file. This HMM is here referred to as an 'external profile', and its use in this way as 'external profile alignment' (EPA). During EPA, each sequence in the input set is aligned to the external profile. Pseudocount information from the external profile is then transferred, position by position, to the input sequence. Ideally, this would be used with large curated alignments of particular proteins or domains of interest, such as are used in metagenomics projects. Rather than taking the input sequences and aligning them from scratch every time new sequences are found, the alignment should be carefully maintained and used as an external profile for EPA. Clustal Omega can also align sequences to existing alignments using conventional alignment methods. Users can add sequences to an alignment one by one, or align a set of aligned sequences to the alignment.

In this paper, we demonstrate the EPA approach with two examples. First, we take the 94 HomFam test cases from the previous section and use the corresponding Pfam HMM for EPA. Before EPA, the average accuracy for the test cases was 0.627 of correctly aligned Homstrad positions, but after EPA it rises to 0.653. This is plotted, test case for test case, in Figure 2A. Each dot is one test case, with the TC score for Clustal Omega plotted against the score using EPA. The second example is illustrated in Figure 2B. Here, we take all the BAliBASE reference sets and align them as normal using Clustal Omega, obtaining the benchmark result of 0.554 of columns correctly aligned, as already reported in Table I. For EPA, we use the benchmark reference alignments themselves as external profiles. The results now jump to 0.857 of columns correct. This is a jump of over 30%, and while it is not a valid measure of Clustal Omega accuracy for comparison with other programs, it does illustrate the potential power of EPA to use information in external alignments.

Iteration

EPA can also be used in a simple iteration scheme. Once an MSA has been made from a set of input sequences, it can be converted into an HMM and used for EPA to help realign the input sequences. This can also be combined with a full recalculation of the guide tree. In Figure 3, we show the results of one and two iterations on every test case from HomFam. The graph is plotted as a running average TC score for all test cases with N or fewer sequences, where N is plotted on the horizontal axis using a log scale. With some smaller test cases, iteration actually has a detrimental effect. Once you get near 1000 or more sequences, however, a clear trend emerges: the more sequences you have, the more beneficial the effect of iteration is. With bigger test cases, it becomes more and more beneficial to apply two iterations. This result confirms the usefulness of EPA as a general strategy. It also confirms the difficulty of aligning extremely large numbers of sequences, but gives one partial solution. It also gives a very simple but effective iteration scheme, not just for guide tree iteration, as used in many packages, but for iteration of the alignment itself.

Discussion

The main breakthroughs since the mid 1980s in MSA methods have been progressive alignment and the use of consistency. Otherwise, most recent work has concerned refinements for speed or accuracy on benchmark test sets. The speed increases have been dramatic but, with just two major exceptions, the methods are still basically O(N^2) and incapable of being extended to data sets of >10 000 sequences. The two exceptions are mBed, used here, and MAFFT PartTree. PartTree is faster, but at the expense of accuracy, at least as judged by the benchmarking here. The second group of recent developments has concerned accuracy. This has tended to focus on results from benchmarking, a potentially contentious issue (Aniba et al, 2010; Edgar, 2010). The benchmark test sets that we have are limited in scope and heavily biased toward single-domain globular proteins. This has the potential to lead to methods that behave well on benchmarks but are not so flexible or useful in real-world situations.

One development to improve accuracy has been the recruitment of extra homologs to bulk up input data sets. This seems to work well with the consistency-based methods and for small data sets. It appears, however, that there is a limit to the extra accuracy that can be obtained this way without further development. The extra sequences may also bring in noise and dramatically increase the complexity of the computational problem. This can be partly fixed by iteration, but EPA against a high-quality reference alignment might be a better solution. This also raises the need for methods to visualize such large alignments, in order to detect problems. A second major focus for development has been the use of external information such as RNA structure (Wilm et al, 2008) or protein structure predictions (Pirovano et al, 2008). EPA is a new approach that allows users to exploit information in their own or in publicly available alignments. It does not force new sequences to follow the older alignment exactly. The new sequences get aligned to each other using progressive alignment, but the information in the external profile can help indicate which amino acids are most likely to occur at each position in a sequence. Most methods attempt to predict this from general models of protein evolution, with secondary structure prediction as a refinement. In this paper, we have shown that even using the mass-produced alignments from Pfam as external profiles provides a small increase in accuracy for a large general set of test cases. This opens up a new set of possibilities for users to make use of the information contained in large, publicly available alignments, and creates an incentive for database providers to make very high-quality alignments available.

One of the reasons for the great success of Clustal X was its very user-friendly graphical user interface (GUI). This, however, is not as critical as in the past, owing to the widespread availability of web-based services where the GUI is provided by the web-based front-end server. Further, there are several very high-quality alignment viewers and editors, such as Jalview (Clamp et al, 2004) and Seaview (Gouy et al, 2010), that read Clustal Omega output or can call Clustal Omega directly.

Materials and methods

Clustal Omega is licensed under the GNU Lesser General Public License. Source code as well as precompiled binaries for Linux, FreeBSD, Windows and Mac (Intel and PowerPC) are available at http://www.clustal.org. Clustal Omega is available as a command-line program only, which uses GNU-style command-line options and also accepts ClustalW-style command options for backwards compatibility and easy integration into existing pipelines. Clustal Omega is written in C and C++ and makes use of a number of excellent free software packages. We used a modified version of Sean Eddy's Squid library (http://selab.janelia.org/software.html) for sequence I/O, allowing the use of a wide variety of file formats. We use David Arthur's k-means++ code (Arthur and Vassilvitskii, 2007) for fast clustering of sequence vectors. Code for fast UPGMA and guide tree handling routines was adopted from MUSCLE (Edgar, 2004). We use the OpenMP library to enable multithreaded computation of pairwise distances and alignment match states. The documentation for Clustal Omega's API is part of the source code and is additionally available from http://www.clustal.org/omega/clustalo-api/. Full details of all algorithms are given in the accompanying Supplementary Information.

The benchmarks used were BAliBASE 3 (Thompson et al, 2005), PREFAB 4.0 (posted March 2005) (Edgar, 2010) and a newly constructed data set (HomFam) using sequences from Pfam (version 25) and Homstrad (as of 2011-06-13) (Mizuguchi et al, 1998). The programs that were compared can be obtained from:

ClustalW2 v2.1 (http://www.clustal.org)
DIALIGN 2.2.1 (http://dialign.gobics.de/)
FSA 1.15.5 (http://sourceforge.net/projects/fsa/)
Kalign 2.04 (http://msa.sbc.su.se/cgi-bin/msa.cgi)
MAFFT 6.857 (http://mafft.cbrc.jp/alignment/software/source.html)
MSAProbs 0.9.4 (http://sourceforge.net/projects/msaprobs/files/)
MUSCLE 3.8.31, posted 1 May 2010 (http://www.drive5.com/muscle/downloads.htm)
PRANK v.100802, 2 August 2010 (http://www.ebi.ac.uk/goldman-srv/prank/src/prank/)
Probalign v1.4 (http://cs.njit.edu/usman/probalign/)
PROBCONS 1.12 (http://probcons.stanford.edu/download.html)
T-Coffee 8.99 (http://www.tcoffee.org/Projects_home_page/t_coffee_home_page.html#DOWNLOAD)

Supplementary Material: Supplementary Information; Supplementary Figures S1–S3; Review Process File
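The iteration scheme from the Results section (align, convert the alignment to an HMM, realign with that HMM as an external profile) can be approximated on the command line. A hedged sketch, assuming clustalo and HMMER's hmmbuild are installed and on PATH; all file names are invented, and clustalo's --hmm-in option is used here as the external-profile input described in the text:

```python
# Hedged sketch of the EPA iteration scheme using command-line tools:
# clustalo (--hmm-in supplies an external profile) and HMMER's hmmbuild.
# File names are illustrative; both tools are assumed to be on PATH.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Initial alignment from scratch.
run(["clustalo", "-i", "input.fa", "-o", "aln0.fa", "--force"])

# Each iteration: build an HMM from the previous alignment, then realign
# the original input sequences with that HMM as an external profile.
for i in (1, 2):
    run(["hmmbuild", f"iter{i}.hmm", f"aln{i - 1}.fa"])
    run(["clustalo", "-i", "input.fa", f"--hmm-in=iter{i}.hmm",
         "-o", f"aln{i}.fa", "--force"])
```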

              mapDamage2.0: fast approximate Bayesian estimates of ancient DNA damage parameters

Motivation: Ancient DNA (aDNA) molecules in fossilized bones and teeth, coprolites, sediments, mummified specimens and museum collections represent fantastic sources of information for evolutionary biologists, revealing the agents of past epidemics and the dynamics of past populations. However, the analysis of aDNA generally faces two major issues. Firstly, sequences consist of a mixture of endogenous and various exogenous backgrounds, mostly microbial. Secondly, high nucleotide misincorporation rates can be observed as a result of severe post-mortem DNA damage. Such misincorporation patterns are instrumental in authenticating ancient sequences versus modern contaminants. We recently developed the user-friendly mapDamage package, which identifies such patterns from next-generation sequencing (NGS) datasets. The absence of formal statistical modeling of the DNA damage process, however, precluded rigorous quantitative comparisons across samples.

Results: Here, we describe mapDamage 2.0, which extends the original features of mapDamage by incorporating a statistical model of DNA damage. Assuming that damage events depend only on sequencing position and post-mortem deamination, our Bayesian statistical framework provides estimates of four key features of aDNA molecules: the average length of overhangs (λ), nick frequency (ν), and cytosine deamination rates in both double-stranded regions (δD) and single-stranded overhangs (δS). Our model enables rescaling base quality scores according to their probability of being damaged. mapDamage 2.0 handles NGS datasets with ease and is compatible with a wide range of DNA library protocols.

Availability: mapDamage 2.0 is available at ginolhac.github.io/mapDamage/ as a Python package; documentation is maintained at the Centre for GeoGenetics Web site (geogenetics.ku.dk/publications/mapdamage2.0/).

Contact: jonsson.hakon@gmail.com

Supplementary information: Supplementary data are available at Bioinformatics online.
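The four damage parameters lend themselves to a small forward simulation. A sketch under stated assumptions: overhang lengths drawn as Geometric(λ), C→T deamination at rate δS inside single-stranded overhangs and δD elsewhere; the nick parameter ν is omitted for brevity, and all numeric values are invented, not estimates from any sample:

```python
# Forward-simulation sketch of a mapDamage2.0-style damage model:
# overhang lengths ~ Geometric(LAM); C->T deamination at DELTA_S in
# single-stranded overhangs and DELTA_D in double-stranded DNA.
# The nick frequency (nu) is omitted; all parameter values are invented.
import random

LAM, DELTA_S, DELTA_D = 0.3, 0.6, 0.02

def overhang_length(lam):
    """Geometric draw: number of failures before the first success."""
    n = 0
    while random.random() > lam:
        n += 1
    return n

def damage(seq):
    left = overhang_length(LAM)    # 5' single-stranded overhang
    right = overhang_length(LAM)   # 3' single-stranded overhang
    out = []
    for i, base in enumerate(seq):
        single = i < left or i >= len(seq) - right
        rate = DELTA_S if single else DELTA_D
        if base == "C" and random.random() < rate:
            base = "T"             # post-mortem cytosine deamination
        out.append(base)
    return "".join(out)

random.seed(1)
print(damage("CCATGCGTACCGTTACC"))
```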

                Author and article information

                Contributors
                plnagy@praxisgenomics.com
Journal
Eur J Hum Genet (European Journal of Human Genetics)
Springer International Publishing (Cham)
ISSN: 1018-4813 (print); 1476-5438 (electronic)
Published online: 7 July 2020
Issue date: January 2021
Volume 29, Issue 1, pages 164–172
Affiliations
[1] Department of Pathology, Laboratory of Personalized Genomic Medicine, Columbia University, New York, NY, USA
[2] National Institute of Oncology, Budapest, Hungary
[3] Department of Archaeogenetics, Institute of Hungarian Research, Budapest, Hungary
[4] Department of Genetics, University of Szeged, Szeged, Hungary
[5] MNG Laboratories LLC, Atlanta, GA, USA
[6] Aix Marseille Université, CNRS, EFS, ADES, “Biologie des Groupes Sanguins”, Marseille, France
[7] Gene by Gene, Houston, TX, USA
[8] Department of Genetics and Fundamental Medicine, Bashkir State University, Ufa, Russia
[9] Institute of Biochemistry and Genetics, Subdivision of the Ufa Federal Research Centre of the Russian Academy of Sciences, Ufa, Russia
[10] Department of Pediatrics and Pediatric Health Center, University of Szeged, Szeged, Hungary
[11] King St. Stephen Museum, Székesfehérvár, Hungary
[12] Gyula Siklósi Research Centre for Urban History, Székesfehérvár, Hungary
[13] Gyula László Department and Archive, Institute of Hungarian Research, Budapest, Hungary
[14] Institute of Forensic Medicine, Clinical Center of Vojvodina, Novi Sad, Serbia
[15] Faculty of Medicine, University of Novi Sad, Novi Sad, Serbia
[16] Estonian Biocentre, Institute of Genomics, University of Tartu, Tartu, Estonia
[17] Department of Genetics, Stanford University, Stanford, CA, USA
[18] Present address: Praxis Genomics LLC, Atlanta, GA, USA
[19] Present address: Boston Children’s Hospital, Boston, MA, USA
                Author information
                http://orcid.org/0000-0002-7461-8415
                http://orcid.org/0000-0002-6740-7484
                http://orcid.org/0000-0003-3466-0368
                http://orcid.org/0000-0002-1032-0177
                http://orcid.org/0000-0002-0940-6627
                http://orcid.org/0000-0001-7867-6455
                http://orcid.org/0000-0002-0515-117X
                http://orcid.org/0000-0002-5999-149X
                http://orcid.org/0000-0003-2987-3334
Article
Article number: 683
DOI: 10.1038/s41431-020-0683-z
PMCID: PMC7809292
PMID: 32636469
                © The Author(s) 2020

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 5 November 2019
Revised: 16 June 2020
Accepted: 25 June 2020
Funding
Funded by: Russian Foundation for Basic Research (RFBR) (FundRef https://doi.org/10.13039/501100002261); Award IDs: 17-44-020748, 19-04-01195
Funded by: Ministry of Science and Higher Education of the Russian Federation; Award ID: FZWU-2020-0027
Funded by: Eesti Teadusagentuur (Estonian Research Council) (FundRef https://doi.org/10.13039/501100002301); Award ID: IUT-24
                Categories
                Article
                © European Society of Human Genetics 2021

Genetics
genome evolution, data mining
