
      KAMO: towards automated data processing for microcrystals


          Synopsis

          An automated data-processing pipeline for protein microcrystals is presented. The processing of multiple small-wedge data sets was made dramatically easier by this pipeline.

          Abstract

          In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5–10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate the whole data-processing workflow for multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejection, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
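The collation step described in the abstract (grouping data sets indexed with equivalent unit-cell parameters) can be sketched as follows. This is a simplified illustration, not KAMO's actual code; the tolerance values and the greedy grouping strategy are assumptions made for the example:

```python
from math import isclose

def cells_equivalent(c1, c2, len_tol=0.02, ang_tol=1.5):
    """Compare two unit cells (a, b, c in Angstroms; alpha, beta, gamma
    in degrees). Lengths use a relative tolerance, angles an absolute
    tolerance in degrees; both values are illustrative choices."""
    lengths_ok = all(isclose(x, y, rel_tol=len_tol) for x, y in zip(c1[:3], c2[:3]))
    angles_ok = all(abs(x - y) <= ang_tol for x, y in zip(c1[3:], c2[3:]))
    return lengths_ok and angles_ok

def collate_by_cell(datasets):
    """Greedily group (name, cell) pairs: a data set joins the first
    group whose founding member has an equivalent cell."""
    groups = []
    for name, cell in datasets:
        for group in groups:
            if cells_equivalent(group[0][1], cell):
                group.append((name, cell))
                break
        else:
            groups.append([(name, cell)])
    return groups

datasets = [
    ("xtal01", (78.1, 78.3, 37.0, 90.0, 90.0, 90.0)),
    ("xtal02", (78.4, 78.0, 37.1, 90.0, 90.0, 90.0)),
    ("xtal03", (103.2, 103.2, 55.8, 90.0, 90.0, 120.0)),  # different lattice
]
groups = collate_by_cell(datasets)  # two groups: {xtal01, xtal02} and {xtal03}
```

A real pipeline would of course compare reduced cells and handle alternative indexing settings; the sketch only shows the grouping idea.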

          Related collections

          Most cited references (25)


          An introduction to data reduction: space-group determination, scaling and intensity statistics

1. Introduction

Estimates of integrated intensities from X-ray diffraction images are not generally suitable for immediate use in structure determination. Theoretically, the measured intensity I_h of a reflection h is proportional to the square of the underlying structure factor, |F_h|², which is the quantity that we want, with an associated measurement error, but systematic effects of the diffraction experiment break this proportionality. Such systematic effects include changes in the beam intensity, changes in the exposed volume of the crystal, radiation damage, bad areas of the detector and physical obstruction of the detector (e.g. by the backstop or cryostream). If data from different crystals (or different sweeps of the same crystal) are being merged, corrections must also be applied for changes in exposure time and rotation rate. In order to infer |F_h|² from I_h, we need to put the measured intensities on the same scale by modelling the experiment and inverting its effects. This is generally performed in a scaling process that makes the data internally consistent by adjusting the scaling model to minimize the difference between symmetry-related observations. This process requires us to know the point-group symmetry of the diffraction pattern, so we need to determine this symmetry prior to scaling. The scaling process produces an estimate of the intensity of each unique reflection by averaging over all of the corrected intensities, together with an estimate of its error σ(I_h). The final stage in data reduction is estimation of the structure amplitude |F_h| from the intensity, which is approximately I_h^(1/2) (but with a skewing factor for intensities that are below or close to background noise, e.g. ‘negative’ intensities); at the same time, the intensity statistics can be examined to detect pathologies such as twinning.
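The scaling idea described above (adjusting scale factors so that symmetry-related observations agree) can be illustrated with a toy alternating least-squares sketch. This shows the principle only, not the scaling model used by real programs; the simple model I[i, h] = k[i] * J[h] is an assumption for the example:

```python
import numpy as np

def scale_observations(I, n_iter=50):
    """Toy scaling by alternating least squares on the model
    I[i, h] ~ k[i] * J[h]: solve for the per-reflection means J given
    the scales k, then for k given J. The overall scale is fixed by
    normalizing k[0] = 1. Missing observations are not handled."""
    k = np.ones(I.shape[0])
    for _ in range(n_iter):
        J = (k[:, None] * I).sum(axis=0) / (k ** 2).sum()  # best J given k
        k = (I * J[None, :]).sum(axis=1) / (J ** 2).sum()  # best k given J
    k0 = k[0]
    return k / k0, J * k0

# Synthetic noiseless data with known per-image scales 1.0, 0.5, 2.0
rng = np.random.default_rng(0)
J_true = rng.uniform(10.0, 100.0, size=20)
k_true = np.array([1.0, 0.5, 2.0])
I_obs = k_true[:, None] * J_true[None, :]

k_est, J_est = scale_observations(I_obs)  # recovers k_true and J_true
```

Real scaling models additionally include smoothly varying factors along rotation, B-factor terms and error-model refinement; the alternating update above is just the core least-squares idea.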
This paper presents a brief overview of how to run CCP4 programs for data reduction through the CCP4 graphical interface ccp4i and points out some issues that need to be considered. No attempt is made to be comprehensive nor to provide full references for everything. Automated pipelines such as xia2 (Winter, 2010) are often useful and generally work well, but sometimes in difficult cases finer control is needed. In the current version of ccp4i (CCP4 release 6.1.3) the ‘Data Reduction’ module contains two major relevant tasks: ‘Find or Match Laue Group’, which determines the crystal symmetry, and ‘Scale and Merge Intensities’, which outputs a file containing averaged structure amplitudes. Future GUI versions may combine these steps into a simplified interface. Much of the advice given here is also present in the CCP4 wiki (http://www.ccp4wiki.org/).

2. Space-group determination

The true space group is only a hypothesis until the structure has been solved, since it can be hard to distinguish between exact crystallographic symmetry and approximate noncrystallographic symmetry. However, it is useful to find the likely symmetry early on in the structure-determination pipeline, since it is required for scaling and indeed may affect the data-collection strategy. The program POINTLESS (Evans, 2006) examines the symmetry of the diffraction pattern and scores the possible crystallographic symmetry. Indexing in the integration program (e.g. MOSFLM) only indicates the lattice symmetry, i.e. the geometry of the lattice giving constraints on the cell dimensions (e.g. α = β = γ = 90° for an orthorhombic lattice), but such relationships can arise accidentally and may not reflect the true symmetry. For example, a primitive hexagonal lattice may belong to point groups 3, 321, 312, 6, 622 or indeed lower symmetry (C222, 2 or 1).
A rotational axis of symmetry produces identical true intensities for reflections related by that axis, so examination of the observed symmetry in the diffraction pattern allows us to determine the likely point group and hence the Laue group (a point group with added Friedel symmetry) and the Patterson group (with any lattice centring): note that the Patterson group is labelled ‘Laue group’ in the output from POINTLESS. Translational symmetry operators that define the space group (e.g. the distinction between a pure dyad and a screw dyad) are only visible in the observed diffraction pattern as systematic absences, along the principal axes for screws, and these are less reliable indicators since there are relatively few axial reflections in a full three-dimensional data set and some of these may be unrecorded.

The protocol for determination of the space group in POINTLESS is as follows.

(i) From the unit-cell dimensions and lattice centring, find the highest compatible lattice symmetry within some tolerance, ignoring any input symmetry information.

(ii) Score each potential rotational symmetry element belonging to the lattice symmetry using all pairs of observations related by that element.

(iii) Score combinations of symmetry elements for all possible subgroups of the lattice-symmetry group (Laue or Patterson groups).

(iv) Score possible space groups from axial systematic absences (the space group is not needed for scaling but is required later for structure solution).

(v) Scores for rotational symmetry operations are based on correlation coefficients rather than R factors, since they are less dependent on the unknown scales. A probability is estimated from the correlation coefficient, using equivalent-size samples of unrelated observations to estimate the width of the probability distribution (see Appendix A).
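The correlation-coefficient scoring of a single symmetry element (step ii above) might be sketched like this. It is a simplified illustration, not POINTLESS itself, and the synthetic twofold data are invented for the example:

```python
import numpy as np

def score_symmetry_element(intensities, op):
    """Pearson correlation between pairs of observations related by the
    candidate symmetry operation `op`. `intensities` maps Miller indices
    (h, k, l) to intensities; only indices whose mate was also observed
    contribute a pair."""
    pairs = [(I, intensities[op(hkl)])
             for hkl, I in intensities.items()
             if op(hkl) in intensities and op(hkl) != hkl]
    x, y = np.array(pairs).T
    return np.corrcoef(x, y)[0, 1]

# Synthetic data obeying a twofold along c: I(h, k, l) = I(-h, -k, l)
rng = np.random.default_rng(1)
data = {}
for h in range(1, 6):
    for k in range(1, 6):
        for l in range(1, 6):
            true_I = rng.uniform(10.0, 1000.0)
            data[(h, k, l)] = true_I + rng.normal(0.0, 1.0)
            data[(-h, -k, l)] = true_I + rng.normal(0.0, 1.0)

dyad_along_c = lambda hkl: (-hkl[0], -hkl[1], hkl[2])
cc = score_symmetry_element(data, dyad_along_c)  # close to 1 for a true element
```

A symmetry element that is actually present gives a correlation near 1; scoring an operation the data do not obey gives a correlation near 0, which is the basis for the probability estimates mentioned in the text.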
2.1. A simple example

POINTLESS may be run from the ‘Data Reduction’ module of ccp4i with the task ‘Find or Match Laue Group’ or from the ‘QuickSymm’ option of the iMOSFLM interface (Battye et al., 2011). Unless the space group is known from previous crystals, the appropriate major option is ‘Determine Laue group’. To use this, fill in the boxes for the title, the input and output file names and the project, crystal and data-set names (if not already set in MOSFLM). Table 1 shows the results for a straightforward example in space group P212121. Table 1(a) shows the scores for the three possible dyad axes in the orthorhombic lattice, all of which are clearly present. Combining these (Table 1b) shows that the Laue group is mmm with a primitive lattice, Patterson group Pmmm. Fourier analysis of systematic absences along the three principal axes shows that all three have alternating strong (even) and weak (odd) intensities (Fig. 1 and Table 1c), so all are likely to be screw axes, implying that the space group is P212121. However, there are only three h00 reflections recorded along the a* axis, so confidence in the space-group assignment is not as high as the confidence in the Laue-group assignment (Table 1d). With so few observations along this axis, it is impossible to be confident that P212121 is the true space group rather than P22121.

2.2. A pseudo-cubic example

Table 2 shows the scores for individual symmetry elements for a pseudo-cubic case with a ≃ b ≃ c. It is clear that only the orthorhombic symmetry elements are present: these are the high-scoring elements marked ‘***’. Neither the fourfolds characteristic of tetragonal groups nor the body-diagonal threefolds (along 111 etc.) characteristic of cubic groups are present. The joint probability score for the Laue group Pmmm is 0.989. The suggested solution (not shown) interchanges k and l to make a < b < c.

One indicator of a significant anomalous signal is the ratio of the root-mean-square anomalous difference ΔI_anom to its estimated error, which is greater than 1 if the anomalous differences are on average greater than their error.
Another way of detecting a significant anomalous signal is to compare the two estimates of ΔI_anom from random half data sets, ΔI_1 and ΔI_2 (provided there are at least two measurements of each, i.e. a multiplicity of roughly 4). Figs. 5(b) and 5(f) show the correlation coefficient between ΔI_1 and ΔI_2 as a function of resolution: Fig. 5(f) shows little statistical significance beyond about 4.5 Å resolution. Figs. 5(c) and 5(g) show scatter plots of ΔI_1 against ΔI_2: this plot is elongated along the diagonal if there is a large anomalous signal, and this can be quantified as the ‘r.m.s. correlation ratio’, which is defined as (root-mean-square deviation along the diagonal)/(root-mean-square deviation perpendicular to the diagonal) and is shown as a function of resolution in Figs. 5(d) and 5(h). The plots against resolution give a suggestion of where the data might be cut for substructure determination, but it is important to note that useful albeit weak phase information extends well beyond the point at which these statistics show a significant signal.

5. Estimation of amplitude |F| from intensity I

If we knew the true intensity J we could just take the square root, |F| = J^(1/2). However, measured intensities have an error, so a weak intensity may well be measured as negative (i.e. below background); indeed, multiple measurements of a true intensity of zero should be equally often positive and negative. This is one reason why, when possible, it is better to use I rather than |F| in structure determination and refinement. The ‘best’ (most likely) estimate of |F| is larger than I^(1/2) for weak intensities, since we know |F| > 0, but |F| = I^(1/2) is a good estimate for stronger intensities, roughly those with I > 3σ(I). The programs TRUNCATE and its newer version CTRUNCATE estimate |F| from I and σ(I) as the posterior expectation |F|_est = ∫ J^(1/2) p(J | I) dJ, where p(J | I) ∝ exp[−(I − J)²/2σ²(I)] p(J) and the prior probability of the true intensity, p(J), is estimated from the average intensity in the same resolution range (French & Wilson, 1978).
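The Bayesian estimate of |F| can be illustrated numerically. This toy version assumes an exponential (acentric Wilson) prior with mean Sigma and evaluates the integral on a grid; the real TRUNCATE/CTRUNCATE implementation differs in detail:

```python
import numpy as np

def estimate_F(I, sigma, Sigma, n_grid=20000):
    """Posterior-mean estimate of |F| = <sqrt(J)> for a measured
    intensity I with error sigma, under an exponential (acentric
    Wilson) prior with mean Sigma for the true intensity J >= 0.
    The integral is a simple grid sum, purely for illustration."""
    J = np.linspace(0.0, 10.0 * Sigma + abs(I) + 10.0 * sigma, n_grid)
    posterior = np.exp(-J / Sigma - 0.5 * ((I - J) / sigma) ** 2)
    posterior /= posterior.sum()
    return float(np.sqrt(J) @ posterior)

F_strong = estimate_F(I=100.0, sigma=5.0, Sigma=50.0)  # close to sqrt(100) = 10
F_weak = estimate_F(I=-2.0, sigma=2.0, Sigma=50.0)     # small but strictly positive
```

Note how a strong intensity reproduces |F| ≈ I^(1/2), while a negative measured intensity still yields a small positive amplitude, exactly the behaviour described in the text.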
6. Intensity statistics and crystal pathologies

At the end stage of data reduction, after scaling and merging, the distribution of intensities and its variation with resolution can indicate problems with the data, notably twinning (see, for example, Lebedev et al., 2006; Zwart et al., 2008). The simplest expected intensity statistics as a function of resolution s = sinθ/λ arise from assuming that atoms are randomly placed in the unit cell, in which case 〈I〉(s) = 〈FF*〉(s) = Σ_j g(j, s)², where g(j, s) is the scattering from the jth atom at resolution s. This average intensity falls off with resolution mainly because of atomic motions (B factors). If all atoms were equal and had equal B factors, then 〈I〉(s) = C exp(−2Bs²) and the ‘Wilson plot’ of log[〈I〉(s)] against s² would be a straight line of slope −2B. The Wilson plot for proteins shows peaks at ∼10 and 4 Å and a dip at ∼6 Å arising from the distribution of interatomic spacings in polypeptides (fewer atoms 6 Å apart than 4 Å apart), but the slope at higher resolution does give an indication of the average B factor, and an unusual shape can indicate a problem (e.g. 〈I〉 increasing at the outer limit, spuriously large 〈I〉 owing to ice rings etc.).

For detection of crystal pathologies we are not so interested in resolution dependence, so we can use normalized intensities Z = I/〈I〉(s) ≃ |E|², which are independent of resolution and should ideally be corrected for anisotropy (as is performed in CTRUNCATE). Two useful statistics on Z are plotted by CTRUNCATE: the moments of Z as a function of resolution and its cumulative distribution. While 〈Z〉(s) = 1.0 by definition, its second moment 〈Z²〉(s) (equivalent to the fourth moment of E) is >1.0 and is larger if the distribution of Z is wider. The ideal value of 〈E⁴〉 is 2.0, but it will be smaller for the narrower intensity distribution from a merohedral twin (too few weak reflections), equal to 1.5 for a perfect twin, and larger if there are too many weak reflections, e.g. from a noncrystallographic translation which leads to a whole class of reflections being weak. The cumulative distribution plot of N(z), the fraction of reflections with Z < z, against z is likewise diagnostic of twinning. A related test uses the local intensity difference statistic L, whose expected cumulative distribution is N(|L|) = |L| for untwinned data and N(|L|) = |L|(3 − L²)/2 for a perfect twin. This test seems to be largely unaffected by anisotropy or translational noncrystallographic symmetry, which may affect tests on Z. The calculation of Z = I/〈I〉(s) depends on using a suitable value for 〈I〉(s), and noncrystallographic translations or uncorrected anisotropy lead to the use of an inappropriate value for 〈I〉(s). These statistical tests are all unweighted, so it may be better to exclude weak high-resolution data or to examine the resolution dependence of, for example, the moments of Z (or possibly L). It is also worth noting that fewer weak reflections than expected may arise from unresolved closely spaced spots along a long real-space axis, so that weak reflections are contaminated by neighbouring strong reflections, thus mimicking the effect of twinning.

7. Summary: questions and decisions

In the process of data reduction, a number of decisions need to be taken either by the programs or by the user. The main questions and considerations are as follows.

(i) What is the point group or Laue group? This is usually unambiguous, but pseudosymmetry may confuse the programs and the user. Close examination of the scores for individual symmetry elements from POINTLESS may suggest lower-symmetry groups to try.

(ii) What is the space group? Distinction between screw axes and pure rotations from axial systematic absences is often unreliable, and it is generally a good idea to try all the likely space groups (consistent with the Laue group) in the key structure-solution step: either molecular-replacement searches or substructure searches in experimental phasing. For example, in a primitive orthorhombic system the eight possible groups P2x2x2x should be tried.
This has the added advantage of providing some negative controls on the success of the structure solution.

(iii) Is there radiation damage: should data collected after the crystal has received a high dose of radiation be ignored (possibly at the expense of resolution)? Cutting back data from the end may reduce completeness, and the optimum trade-off is hard to choose.

(iv) What is the best resolution cutoff? An appropriate choice of resolution cutoff is difficult and sometimes seems to be performed mainly to satisfy referees. On the one hand, cutting back too far risks excluding data that do contain some useful information. On the other hand, extending the resolution further makes all statistics look worse and may in the end degrade maps. The choice is perhaps not as important as is sometimes thought: maps calculated with slightly different resolution cutoffs are almost indistinguishable.

(v) Is there an anomalous signal detectable in the intensity statistics? Note that a weak anomalous signal may still be useful even if it is not detectable in the statistics. The statistics do give a good guide to a suitable resolution limit for location of the substructure, but the whole resolution range should be used in phasing.

(vi) Are the data twinned? Highly twinned data sets can be solved by molecular replacement and refined, but probably cannot be solved by experimental phasing methods. Partially twinned data sets can often be solved by ignoring the twinning and then refined as a twin.

(vii) Is this data set better or worse than those previously collected? One of the best things to do with a bad data set is to throw it away in favour of a better one. With modern synchrotrons, data collection is so fast that we usually have the freedom to collect data from several equivalent crystals and choose the best.
In most cases the data-reduction process is straightforward, but in difficult cases critical examination of the results may make the difference between solving and not solving the structure.
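The second-moment twin test of §6 can be illustrated by simulation (expected values only, not production code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def second_moment(I):
    """<Z^2> with Z = I/<I>, computed over one notional resolution shell."""
    Z = I / I.mean()
    return (Z ** 2).mean()

# Acentric untwinned intensities follow an exponential (Wilson) distribution,
# for which <Z^2> = 2.0.
I_untwinned = rng.exponential(scale=1.0, size=n)

# A perfect merohedral twin averages two independent intensities, which
# narrows the distribution towards <Z^2> = 1.5.
I_twinned = 0.5 * (rng.exponential(1.0, n) + rng.exponential(1.0, n))

m_untwinned = second_moment(I_untwinned)  # ~2.0
m_twinned = second_moment(I_twinned)      # ~1.5
```

The simulation reproduces the ideal values quoted in the text: 2.0 for untwinned acentric data and 1.5 for a perfect twin, with the twinned distribution visibly short of weak reflections.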

            Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

1. Introduction

The formation of good-sized and well diffracting crystals from large and/or insoluble proteins remains a significant challenge in macromolecular crystallography (MX). Progress has certainly been made in recent times (Fry et al., 1999; Caffrey, 2003; Carpenter et al., 2008), but obtaining good-quality crystals of viruses, large molecular complexes or membrane proteins is still challenging today. Even in cases in which small crystals are obtained, very few data can be collected before they receive a damaging radiation dose. Consequently, crystallographers are often unable to collect complete data sets from small crystals of a few micrometres in size (Holton & Frankel, 2010). A possible solution to these limitations is to merge data sets from many crystals into a single data set. This practice is not new, since for isomorphous replacement or multi-wavelength anomalous diffraction data necessarily have to be combined from multiple sets. This has also been a standard approach in virus crystallography, where room-temperature data collection from highly sensitive samples necessarily implies the merging of data from many crystals (Wang et al., 2012). The need to deal with multiple data sets is nowadays becoming more and more common, mainly because of greater accessibility to microfocus beamlines at synchrotrons around the world.
A micrometre-sized beam can be effectively used for data collection from difficult crystals in a number of ways: (i) a large crystal can be hit by the microbeam at different locations during rotation, thereby continually introducing unexposed volumes of the crystal to X-rays and reducing the impact of radiation damage on the resulting data; (ii) small crystals that diffract very weakly on standard beamlines can instead produce respectable diffraction patterns using intense microbeams; and (iii) small crystals can produce weak but sufficiently interpretable diffraction patterns when they are exposed directly inside crystallization plates (Watanabe et al., 2002; Ng et al., 2008; le Maire et al., 2011; Soliman et al., 2011; Axford et al., 2012). The obvious advantage of this in situ technique is that the time-consuming procedures of loop fishing and crystal cooling can be avoided, thus making the method fast and amenable to automation.

If the number of recorded spots and their signal-to-noise ratio are sufficiently high to allow indexing, integration does not usually pose any significant problem for data sets yielded by multiple crystals. However, the merging of such potentially diverse and very partial data sets can be a challenge. Isomorphous crystals will give rise to data whose merging exhibits little error. In contrast, merging data from non-isomorphous crystals yields data sets with bad statistics, and if the degree of non-isomorphism is very high, merging may even fail. Therefore, one approach to building a complete data set from partial data sets from multiple crystals is to select and group together those crystals that have a higher likelihood of merging well together. Besides the obvious advantage of providing complete native data sets, data collection from multiple crystals has also shown potential to reduce systematic errors and to strengthen the anomalous signal in the SAD and MAD phasing techniques (Liu et al., 2011).
A remarkable tour de force in data integration and scaling from multiple microcrystals was recently performed by Hanson et al. (2012). The method adopted by Hanson et al. (2012) is applicable to cases in which the data are strongly affected by radiation damage and the only way to obtain a fairly complete data set to a reasonable resolution is through data collection from very many crystals. The procedure consists of several steps and makes use of recycling between integrated and scaled data, with the progressive inclusion of individual observations based on the value of the corresponding merging statistics. Owing to the need for frequent human intervention to judge the removal or acceptance of groups of observations, this approach is more akin to a protocol than to an automated or semi-automated procedure.

This paper describes a method to assist in the selection of optimal groups of data sets from multiple crystals prior to scaling and merging. Cluster analysis forms the central part of this approach and is implemented in a computer program called BLEND. The following sections introduce the statistical descriptors used to characterize data sets and the clustering method adopted, their implementation in BLEND and several practical examples.

2. Theory and methods

BLEND carries out an analysis of multiple data sets at the pre-scaling stage, working exclusively with integrated but unscaled and unmerged intensities. The main objective of the procedure is to partition multiple data sets into homogeneous groups whose merging statistics have a tendency to show optimal values. A visual rendering of the procedure is depicted in Fig. 1, where partitioning and selection are explained using a simulated example with 11 data sets. Multivariate statistics offers several techniques to accomplish such partitioning.
Cluster analysis is among the most frequently used, especially at an exploratory stage where not much is known about the probability distribution underlying the objects being investigated. The scenario faced by the researcher merging multiple integrated data sets is very much the typical scenario found in exploratory data analysis, where groups are contemplated in an otherwise unshaped set. A good and thorough introduction to cluster analysis is given by Everitt et al. (2011).

2.1. Main elements of the clustering procedure

In our procedure for cluster analysis, each wedge (a contiguous series of diffraction images) of data is treated as a geometric point in multidimensional space. Points which are in close proximity are deemed to belong to the same group. Grouping is performed in a hierarchical way, in which small nuclei of close points are progressively enlarged to include more and more elements until the whole set of data wedges is engulfed in a final all-inclusive cluster. The procedure and the main concepts are now introduced.

2.1.1. Statistical description of data sets and clusters

Individual data objects are characterized through statistical descriptors. These are numerical quantities associated with selected features of each object, whose aim is to describe quantitatively specific parts of data variability. If m statistical descriptors are associated with each object, then all data will be represented as points in an m-dimensional orthonormal space. Distances between all pairs of points in m-dimensional space are computed and the two closest objects are merged to form the first cluster. Although several definitions of distance are possible in cluster analysis, Cartesian distance, as adopted here, is appropriate and most common when the underlying statistical descriptors show continuous variability across real numbers.
2.1.2. Cluster-linkage method

After the formation of the first cluster of two elements, the next two closest points are merged to form the second cluster. In measuring all distances during this second pass, one includes the possibility of measuring the distance between an individual point and a cluster. In BLEND the proximity of a point to a cluster (or between two clusters) is defined as the distance between the point and the cluster centroid (or between the two centroids). The annexation of points or clusters to form larger clusters is known as the linkage method. None of the existing linkage methods is known to perform universally well, and the appropriate method is selected according to the specific application. In the context of obtaining good merged data sets from multiple data wedges, it is desirable to form groups whose features are not too dissimilar among elements within a cluster. For this reason it is advisable to use linkage methods that use centroids. Among these methods, algorithms to implement centroid and Ward linkage are very popular and are readily available. In the centroid linkage the proximity between two clusters is defined as the distance between the centroids of each cluster. In the Ward linkage the proximity is given by the increase in distance variance obtained when the two separate clusters are joined into a single cluster (see, for instance, p. 77 of Everitt et al., 2011). Both types of linkage method tend to produce compact clusters, but the centroid method is subject to the undesirable possibility of inversion: two clusters merging at a given step can happen to be more similar than clusters that merged in a previous step (see, for instance, Tan et al., 2006). In the framework that we have developed, in which data sets merge in order of decreasing proximities, this is an unwanted feature. For this reason, and on the basis of empirical tests, we have decided to work with the Ward linkage.
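The Ward-linkage hierarchy described above can be sketched in a few lines. This is a naive O(n³) illustration of the general technique, not BLEND's implementation:

```python
import numpy as np

def ward_distance(A, B):
    """Increase in within-cluster variance caused by merging A and B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    na, nb = len(A), len(B)
    return na * nb / (na + nb) * float(np.sum((ca - cb) ** 2))

def agglomerate(points, n_clusters):
    """Repeatedly merge the two clusters with the smallest Ward
    distance until n_clusters remain; returns lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = ward_distance(points[clusters[i]], points[clusters[j]])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Six "data sets" described by two standardized descriptors, forming
# two well-separated groups
pts = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
groups = agglomerate(pts, n_clusters=2)
```

Recording the merge distance at each step would yield the cluster heights shown on the dendrograms discussed in the next section.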
2.1.3. Dendrograms

The overall picture of hierarchical clustering can be effectively illustrated by the dendrogram, a kind of inverted tree whose leaves symbolize individual data sets and whose branches progressively merge into each other and eventually into an all-encompassing trunk. A numeric vertical scale, or cluster height, is shown at the side of the tree to give a quantitative measure of the distances at which smaller branches merge into larger branches. Through the selection of one or two values on this vertical scale, a certain number of clusters can be highlighted. In Fig. 1 the whole process is illustrated with a simulated example involving 11 data sets and two statistical descriptors. A review of hierarchical clustering is given by Jain et al. (1999).

2.2. Statistical descriptors

The effectiveness of cluster analysis relies to a great extent on how well the statistical descriptors chosen for data characterization are able to capture their essential features. Differences in the numerical values of such quantities should correlate with physical differences among the associated data sets and structures. It is not clear how to select descriptors when dealing with data immediately after integration, as these have not yet been scaled to produce structure factors. Furthermore, issues concerning indexing and choice of the correct space group are still present at the pre-scaling stage. If integrated intensities are selected to build descriptors, then their averages in resolution shells, rather than individual values, will have to be used. Other quantities available after data integration are unit-cell parameters and mosaicity. The latter can be a measure of crystal packing, but it is very often plagued by correlations with other experimental quantities during post-refinement. Unit-cell parameters can also be affected by such correlations, but to a lesser extent. Indeed, we have found mosaicity not to be an effective and reliable descriptor, and have not used it.
On the other hand, unit-cell parameters have shown a remarkable ability to signal similarities and differences among data sets. For this reason, unit-cell parameters are currently the only descriptors used in BLEND. If no significant variation in unit-cell parameters occurs across multiple data sets, then it can be expected that each parameter will be distributed around a single average. When different crystals (or different data sets from one crystal) exhibit non-isomorphism, the population of unit cells can break up into groups with unit-cell parameters distributed around two or more averages. This can be induced by a change in molecular packing, by a genuine variation in parts of one or more chains composing the structure, by an alteration in the amount of bulk solvent or by a combination of these factors. Indeed, changes in unit-cell parameters caused by structural variations were observed during the first crystallographic studies of macromolecules (Perutz, 1946), as the isomorphous replacement method was then the only available technique to solve protein structures. More recently, dehydration techniques have enabled researchers to explore crystal packing and structural change in a systematic way (Pickford et al., 1993; Esnouf et al., 1998; Bowler et al., 2006; Russi et al., 2011).

It is important to stress that, in order to be effective in the determination of clusters of data sets, unit-cell parameters have to be filtered through principal component analysis and statistical standardization (where data are rescaled so as to have a mean equal to 0 and a standard deviation equal to 1) prior to cluster analysis. This has the beneficial effect of removing undesired correlations among descriptors and of placing all of them on the same numerical scale. The kind of standardization just described allows data to be prepared for cluster analysis in an appropriate way, because all descriptors are given equal numerical weight.
However, this operation also has the undesirable effect of removing the ability to visually detect non-isomorphism between different groups of data sets. The cluster height appearing in the dendrograms presented in this work has no immediate connection with the variation of unit-cell parameters in absolute terms. Thus, for instance, and as will be described later in the text (see §§3.3 and 3.4), the nine data sets of the ultralente structure can be divided into two groups which present a good degree of non-isomorphism, while the 11 data sets for in situ lysozyme do not show any major non-isomorphism. Still, both dendrograms display a clear separation into two distinct groups, and the cluster height does not make it clear that this separation reflects isomorphism issues. It is therefore useful to use some quantitative measure of unit-cell parameter variation in addition to cluster analysis in order to assess the real variation of unit cells, as this is not detectable in the dendrogram. Among the many parameters that can be imagined, we have found that a quantity describing a maximum linear variation that also takes into account unit-cell angles is best suited for the characterization. Such a quantity, named linear cell variation (LCV), measures the maximum linear increase or decrease of the diagonals on the three independent cell faces; details are given in Appendix A. LCV values for the five test cases treated in §3 are shown in Table 1.

2.3. The program structure

BLEND can be executed in two modes: analysis and synthesis. Program execution is regulated by keywords included in an ASCII file. Input data are unmerged reflection files in MTZ or XDS format. The final output is a number of merged reflection files in MTZ format. Initially, BLEND is run in analysis mode (option -a). The input is a list of data sets to be processed or the path to the directory containing the data.
The program checks each data set in turn, making sure that it is properly formatted and includes integrated data taken in a continuous sweep. Data sets failing this check are discarded and an updated file list is stored as a new ASCII file. Next, it applies the unit-cell parameter filtering of §2.2 and performs the cluster analysis (§2.1). The output of the analysis mode is a cluster dendrogram and an associated ASCII file providing details of the clusters. Information obtained during execution in analysis mode is also dumped to a binary file, ready to be read by the program when running in synthesis mode. Users should both look at the dendrogram and read through its ASCII counterpart to decide which clusters of data sets are worth further investigation. Once one or more clusters have been singled out, BLEND is run in synthesis mode (option -s). It will suffice to provide one or two numerical levels corresponding to cluster-height values (see §2.1 and Fig. 1). The program will accordingly scale and merge together all data sets at each node included in the specific dendrogram region. Scaling and merging are carried out using the CCP4 programs POINTLESS and AIMLESS (Evans, 2006). Execution in synthesis mode can be repeated with different numerical level values as many times as required. Specific keywords for either POINTLESS or AIMLESS can be added in BLEND keywords files so that execution in synthesis mode can be customized. BLEND can also be executed in combination mode (option -c) when the user wishes to combine data sets outside an existing cluster. This mode is particularly useful when reference data sets need to be included to stabilize the scaling of many incomplete ones, a practice that is quite often followed by protein crystallographers. Unmerged files produced by POINTLESS containing the most likely space group, or one selected by the user, are also stored, ready to be used for prospective and individual scaling jobs, separately from BLEND.
BLEND requires only minimal and simple intervention from the user, but also permits both analysis and synthesis modes to be tailored to specific needs should the user desire.

3. Test cases, results and discussion

Five test cases have been selected to demonstrate the use and usefulness of BLEND. In each test case, the resolution limits were selected to be the same for all clusters. This was performed in a conservative fashion so as to have 〈I/σ(I)〉 ≥ 2 for all individual data sets. Such limits can easily be changed by the user through the keywords file. Details of data collection for all five test cases are included as Supplementary Material1. All data were integrated using either XDS (Kabsch, 1993) or MOSFLM (Leslie, 1992). A word of warning is needed regarding dendrogram inspection. As explained in §2.2, cluster analysis has a tendency to create partitions out of any group of objects, even when such partitions do not naturally exist. Therefore, the splitting of dendrograms into branches does not necessarily correspond to major physical differences between data sets. As also explained in §2.2, a complementary overall quantity, the LCV, gives a quantitative indication of the degree of isomorphism existing among each group of data sets. LCV values for the five test cases explored here are included in Table 1. The case showing the highest variability is that of ultralente insulin, while in situ lysozyme is the case with the lowest cell variability. These two cases were selected for thorough structural investigations of non-isomorphism (see §§3.3 and 3.4). A few words on the traditional merging statistics used to measure data quality also need to be included. Until recently, these statistics were drawn from data obtained as single complete sweeps from a single crystal. The appreciation of specific numeric values as indicators of good data quality has evolved during several years of steady, and often nonproblematic, collection from such crystals.
With the recent advent of multiple-crystal techniques and with the diffusion of problematic data sets (from membrane proteins, viruses and complexes) this appreciation will have to be widened to allow the inclusion of data that would otherwise be discarded. The determination and numeric values of merging statistics depending on geometric factors, such as completeness or multiplicity, are not affected by the multiple-crystal scenario. Other statistics, however, such as R merge, R meas, R p.i.m. etc. will in general tend to be higher, often assuming values that would be deemed unacceptable if the same data had been collected from a single crystal. Furthermore, the scaling process itself normally finds problems when dealing with multiple data sets, even when non-isomorphism is not an issue. In our experience the estimation of errors for scaled intensities has proved to be particularly problematic, even in those cases where the data sets came from the same crystal. This fact obviously has repercussions on the determination of one of the traditional quality indicators, 〈I/σ(I)〉. In the five test cases reported here we have used AIMLESS for all scaling jobs. The algorithm used in this program for the automatic determination of the SDFAC/SDADD parameters is not stable when applied to multiple incomplete data sets (Evans, 2013). An effort was made to manually adjust these parameters as best as possible and the best efforts have been included in all tables presenting merging statistics. While we acknowledge the importance of the estimation of 〈I/σ(I)〉 as an effective way to judge data quality, it is also important to stress that its evaluation has had no influence on the other results and findings presented in this paper.

3.1. Cryocooled thaumatin: space group P41212

14 data sets were collected from 12 crystals (three data sets were collected at different positions on a large crystal) on beamline I02 (using an ADSC Q315r CCD detector) and beamline I24 (using a Dectris PILATUS 6M detector) at Diamond Light Source, UK. Crystal sizes varied from 30 × 30 to 500 × 500 µm. Angular rotation ranges varied from 45° to 360°. Rotation steps also differed greatly from data set to data set (0.15° to 0.5° per image). All data sets were truncated to include only the first 40 images for the purposes of testing BLEND, in order to avoid unnecessarily lengthy runs while testing and to mimic what might more commonly happen when data sets from multiple smaller crystals are collected. The analysis mode of BLEND on these 14 data sets produced the dendrogram shown in Fig. 2(a). There are 91 ways to combine two different data sets taken from the 14 available, 364 ways to combine three different data sets, 1001 ways to combine four different data sets and so on. In order to quantify how well different groups of data sets merged together, and in light of the very large number of possible merging combinations, a random selection of 50 combinations from all groups of size 2, 3, 4 or more were merged. Given that there are only 14 ways to form different combinations of 13 data sets and only one combination of 14 data sets, all of these combinations were included in the test. It still took a few hours on a standard laptop PC to run POINTLESS and AIMLESS on all sampled groups (there were (11 × 50) + 14 + 1 = 565 runs of POINTLESS and AIMLESS). The highest resolution chosen for all scaling jobs was 1.95 Å. The results are presented in Fig. 2(b) and show the spread of R meas values for all combinations. AIMLESS was run using default options only, resulting in 41 failed jobs, so only 524 points appear in the plot.
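The counts quoted above are easy to verify with a few lines of Python (standard library only; the variable names are ours):

```python
from math import comb

n = 14  # thaumatin data sets

# Groups of size 2, 3 and 4 drawn from the 14 data sets:
pairs, triples, quads = comb(n, 2), comb(n, 3), comb(n, 4)  # 91, 364, 1001

# The sampling scheme described in the text: 50 random combinations for each
# group size from 2 to 12 (11 sizes), plus all comb(14, 13) = 14 combinations
# of size 13 and the single combination of size 14.
total_runs = 11 * 50 + comb(n, 13) + comb(n, 14)  # 565 POINTLESS/AIMLESS runs

# For the nine ultralente data sets of Section 3.3, every combination of two
# or more sets is small enough to enumerate exhaustively: 2**9 - 9 - 1 = 502.
ultralente_combos = 2**9 - 9 - 1
```

The rapid growth of these counts with the number of data sets is the practical reason for restricting merging trials to dendrogram nodes rather than attempting exhaustive enumeration.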
Data overlap between sets and completeness can be low for small group sizes, which would explain why the R meas spread is larger in these cases. At the same time, however, for data sets showing a higher degree of isomorphism the R meas would tend to be lower than for combinations with a higher number of data sets. This has been explained by Diederichs & Karplus (1997) and Weiss (2001) as the statistical tendency of R meas to remain stationary or to increase slightly when the number of unmerged reflections is increased. A slight increase generally indicates the addition of data with good isomorphism, while a larger increase is normally taken to indicate the addition of data which are in part non-isomorphous. The best merged data sets as indicated by BLEND, corresponding to the merging nodes of the dendrogram in Fig. 2(a), are plotted in Fig. 2(c) together with the median, upper and lower inter-quartile lines of Fig. 2(b). In this example ten out of 13 of the BLEND data sets lie below the lower interquartile line, illustrating how BLEND is able to select groups of data sets for merging that have a high likelihood of showing optimal statistics. (See Table 2 for a summary of the merging statistics.) BLEND therefore presents a very quick route to obtaining optimal combinations of data wedges for further downstream analysis.

3.2. Cryocooled insulin: space group H3

These data sets were collected on Diamond MX beamlines I02, I03 and I04. Data were obtained from 14 crystals as short angular sweeps. The dendrogram from BLEND is shown in Fig. 3(a). The same test procedure as used for thaumatin was repeated and the results are presented in Fig. 3(b). The high-resolution cutoff was set to 1.92 Å.
In this case only seven out of 13 groups of data sets fall in the lower quartile; however, 11 out of 13 groups still fall below the median line, again illustrating that clustering using unit-cell parameters leads to the selection of better than average merging sets. Numerical values for the selections suggested by BLEND are listed in Table 3.

3.3. Cryocooled microcrystals of insulin ultralente: space group H3

Very small (25 × 25 × 5 µm) crystals of ultralente insulin have been used in the treatment of diabetes for a number of years. Because of their small size, the structure of ultralente insulin was only recently solved (Wagner et al., 2009) using one of many data sets measured from microcrystals. Although the resolution was relatively high, refinement did not perform particularly well (see Table 4). It was thought that a multiple data-set approach to this problem might yield a better data set for refinement if isomorphism were conserved to a good degree. Nine data sets collected on beamline X06SA at the Swiss Light Source were used. As there were only nine data sets, all 502 combinations of two or more of them were merged and scaled using POINTLESS and SCALA for comparison with the BLEND results. These results, together with the combinations selected by BLEND, are shown in Fig. 4(b). It can be seen from the dendrogram in Fig. 4(a) that the nine data sets partition into two main groups, denoted here as group A (data sets 1–5) and group B (data sets 6–9). In order to evaluate whether this grouping reflects real differences between the data sets or is simply a consequence of the clustering procedure, merging statistics, R meas and R p.i.m., between pairs of data sets within and between groups A and B were calculated and plotted in Fig. 5. Merging statistics are listed in Table 5. It is clear that merged pairs within the groups show significantly better statistics.
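The R meas and R p.i.m. comparisons used here follow the multiplicity-corrected definitions of Diederichs & Karplus (1997) and Weiss (2001). A toy implementation over invented intensity observations, grouped per unique reflection, shows how the three common R factors relate:

```python
from math import sqrt

def r_stats(groups):
    """R_merge, R_meas and R_pim for a list of observation groups, one list
    of redundant intensity measurements per unique reflection.  Reflections
    measured only once carry no information on internal consistency and are
    skipped, as in standard practice."""
    num_merge = num_meas = num_pim = den = 0.0
    for obs in groups:
        n = len(obs)
        if n < 2:
            continue
        mean_i = sum(obs) / n
        dev = sum(abs(i - mean_i) for i in obs)
        num_merge += dev
        num_meas += sqrt(n / (n - 1.0)) * dev
        num_pim += sqrt(1.0 / (n - 1.0)) * dev
        den += sum(obs)
    return num_merge / den, num_meas / den, num_pim / den

# Invented observations for four unique reflections.
toy = [[100, 104, 98], [52, 49], [210, 195, 205, 202], [33, 35, 31]]
r_merge, r_meas, r_pim = r_stats(toy)
```

The sqrt(n/(n − 1)) factor makes R meas essentially multiplicity-independent, which is why a slight rise on adding isomorphous data is expected, while R p.i.m. keeps falling as redundancy grows, reflecting the precision of the merged intensities.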
In order to investigate further, structure refinement was performed against group A (to 2.1 Å resolution) and group B (to 2.0 Å resolution) data and the models were compared. Starting from an initial model (PDB entry 2vk0; Wagner et al., 2009), rigid-body refinement was first carried out, followed by alternating cycles of model building with Coot (Emsley et al., 2010) and model refinement with REFMAC (Murshudov et al., 2011). Finally, a few cycles of TLS refinement with REFMAC were carried out to improve the model. The same test set as used in Wagner et al. (2009) was used for cross-validation and R free calculations. The statistics for the final models, model A and model B, are shown in Table 6. Model A refined to R and R free values of 18.5 and 24.8%, respectively, whereas the values for model B were 20.9 and 28.0%, respectively. Overall, the two models appear to be similar and the refinement statistics were improved, especially for model A, compared with the original structure. The ultralente structure, like other forms of insulin, is a dimer composed of a 21-residue chain (chain A) and a 30-residue chain (chain B). Two of these dimers are contained within the asymmetric unit, and are described as four separate chains: A, B, C and D. The presence of zinc ions facilitates the formation of hexamers following H3 symmetry. Values for the r.m.s. positional difference between Cα atoms refined against group A and group B data for all atoms in the asymmetric unit and the four chains individually vary between 0.33 and 0.48 Å (values are calculated without first superposing the respective molecules). These differences are significant in comparison to the differences in unit-cell parameters between the two structures. This is compatible with some obvious differences at the secondary chain level: (i) the N-terminus of chain B can be modelled in group A, but not in group B (see Fig.
6a), (ii) the N-terminus of chain D can be modelled in group B, but not in group A (see Fig. 6b), (iii) in model A, His5 of chain D is a different rotamer to that in model B, and (iv) in Fig. 6 the F o − F c density map of group A shows (at 3σ) one additional Zn ion with respect to group B. Points (iii) and (iv) seem to be responsible to a great extent for the non-isomorphism, because chain B is connected to chain D of another dimer through a Zn ion only if His5 of chain D has the proper rotamer (see Fig. 6). In this example BLEND was able to generate a merged data set from the combination of data from multiple crystals that resulted in an improved structure compared with that refined from a single-crystal data set.

3.4. In situ room-temperature lysozyme: space group P43212

Test data sets from 12 crystals sitting within crystallization plates were collected on Diamond beamline I04-1. For all of the tests, the known space group P43212 was imposed using the BLEND keywords file. Interestingly, only data set 4 yielded a unique and correct space-group determination in POINTLESS. Fig. 7 shows similar results to those obtained previously using 50 random samples to determine the spread of R meas. Merging statistics are listed in Table 7. In this case, nine out of ten groups fall below the median and seven of them are very close to the lower inter-quartile range line. Two major branches appear in the dendrogram in Fig. 7(a), implying the existence of two isomorphous groups. However, cluster analysis has a natural tendency to form small and large clusters out of any collection of objects, even when distinct groups reflect no real macroscopic differences. This also holds true in the dendrograms produced by BLEND. The ultimate indicator of isomorphism across crystal components is embodied by the merging statistics. A sudden increase in R meas (and R p.i.m.)
values when two groups of data sets merge to form a larger group can be an indication of non-isomorphism, whereas small variations in R meas accompanied by a decrease in R p.i.m. normally indicate a good degree of isomorphism of the merging groups. In this case the indication is that group A and group B have a reasonable degree of isomorphism. In other words, we might say that no major structural differences can be expected in structures refined against data sets from group A and group B. This is illustrated by Table 7, in which group A (data sets 1, 5, 7, 9 and 11) has an R meas of 0.128 (R p.i.m. = 0.048), while group B (data sets 2, 3, 4, 6, 8 and 10) has an R meas of 0.119 (R p.i.m. = 0.040). When merged, their combined R meas increases only slightly to 0.149 and R p.i.m. remains at 0.040. It can be concluded that the two groups are essentially isomorphous, with only very minor variations in structure. To support this conclusion, we refined the two structures using PDB entry 2hu1 as a starting model. The deposited unit-cell parameters of 2hu1 were very close to those for groups A and B. The starting model 2hu1 was stripped of water molecules and counterions and a run of rigid-body refinement was carried out using REFMAC, followed by several cycles of positional and individual B-factor refinement in REFMAC alternating with model building in Coot. TLS refinement was carried out in the last refinement cycle against group A data only. The results are summarized in Table 8. The two structures are essentially identical, with an r.m.s.d. using only the Cα atoms of 0.1 Å. The most notable differences were the number of water molecules and ions: four Cl− ions, eight Na+ ions and 64 waters for the group A model and four Cl− ions, seven Na+ ions and 57 waters in the group B model.

3.5. Cryocooled membrane protein: space group C2

In general, membrane-protein crystals diffract poorly.
Consequently, long exposure times are required to obtain high-resolution data and often a full data set cannot be collected from a single position on the crystal owing to severe radiation damage. Generally, with large crystals data are collected at multiple positions on the crystal. Where the crystals are smaller, multiple crystals must be used to generate the data. As might be expected for crystals with a high solvent content, there is a problem with non-isomorphism between different crystals and consequently many data sets are usually collected to be able to solve the structure. For this novel membrane protein, hereafter labelled memPROT, 22 data sets were collected from six crystals on the MX beamlines at Diamond Light Source. All crystals were mercury-derivatized in an attempt to provide additional phasing information. The data-set distribution across the six crystals is summarized in Table 9. Only three crystals (A34, A45 and yu60) yielded complete data sets from a single sweep at a single position on the crystal. Crystals M1S3 and M1S14 yielded 18 partial data sets, with rotation sweeps of 25° or 60°, while crystal y18 could only provide a 75% complete data set owing to radiation damage. BLEND was used to analyse all 22 derivative data sets and produced the dendrogram in Fig. 8(a). The merging statistics for the combined data sets from BLEND are tabulated in Table 10. Random samplings of clusters with two, three, four and up to 22 data sets, as for the other test cases, provide the R meas benchmark values shown in Fig. 8(b) and, as in the previous examples, most of the BLEND merging groups show the lowest values of R meas. Selection of the best data set from Table 10 is based on a compromise between completeness and data quality as determined by R meas and R p.i.m. In this case merged data sets with a completeness of >90% and an R meas of <0.15, corresponding to clusters 7, 9, 10, 15 and 18, were considered for further analysis.
Of these, clusters 7 and 10 were subclusters of cluster 15 and were therefore discarded. Clusters 9, 15 and 18 were used for phasing, model building and refinement. In order to assess the potential benefits of these three BLEND-selected data sets over the existing complete crystal data sets A34, A45 and yu60, anomalous difference Fourier maps (Roach, 2003) were calculated using phases from the final refined model. Peak heights for the top 20 Fourier peaks were plotted for the six data sets (Fig. 8c). Most noticeable in Fig. 8(c) is that the peak heights for clusters 15 and 18 are higher than those for the single-sweep data sets A34, A45 and yu60. Peak heights for cluster 9 are comparable to those for data set A45, although lower than those for data set A34. The curve corresponding to the anomalous map computed with data from all data sets has also been added to Fig. 8(c). This curve is the highest, as one would expect given the level of isomorphism and the limited resolution for this case. The use of multiple crystals/data sets is a valid alternative to single crystals whenever data from single crystals do not provide sufficient completeness and/or phasing power. Fig. 8(c) also shows results for two other data sets, M1S3 and M1S14, corresponding to the combination of partial data sets collected from crystals M1S3 and M1S14, respectively. It happens that these two combined data sets are not associated with any node of the dendrogram from BLEND. Owing to the nature of the Ward clustering algorithm, other data sets intruded into groups belonging to either M1S3 or M1S14. This prevented, for instance, data sets 1 and 3 from joining data set 2 to produce M1S3; similarly, for M1S14, clusters 15 and 12, forming most of M1S14, were prevented from merging into a single cluster because of data sets 1 and 3. Data sets M1S3 and M1S14 were investigated in this case because they were composed of partial data sets from the same crystals.
It was logical to assume that the partial data sets would be more isomorphous with each other than those from separate crystals. Thus, in this test case, looking at the favourable statistics for clusters 12 and 15 one is led to believe that perhaps all of the partial data sets from crystal M1S14 could provide a complete and good-quality data set. Merging statistics for M1S3 and M1S14 are appended to the bottom of Table 10. The values for M1S14 are indeed remarkably good. The anomalous signal provided by this data set is also exceedingly high compared with those of the other data sets (see the full green curve in Fig. 8c). Data set M1S14 was ultimately used to compute improved phases that resulted in electron density that showed more connectivity than that computed with other data sets. A portion of the electron density from experimental phases corresponding to different data sets is displayed in Fig. 9. The electron density calculated using A34 (Fig. 9a) is visibly less connected than the electron density corresponding to M1S14 (Fig. 9d). It is interesting to also observe better connectivity and more details for clusters 15 and 18 (Figs. 9b and 9c, respectively). Most importantly, as cluster 18 is formed out of contributions from crystals M1S3 and M1S14, if data from M1S14 had been insufficient to form a complete data set and/or to provide any useful anomalous signal, their union with data from M1S3 would have complemented them well for phasing and model building. This case illustrates the significant benefits of BLEND in providing a useful overview of numerous data sets in terms of their relative qualities and how BLEND can be persuasive in encouraging the combination of data from different crystals. It does, however, also identify a current weakness in BLEND whereby the clusters identified do not necessarily correspond to the best possible data-set combination.
It is still noteworthy that BLEND was able to identify a cluster (cl18) comprising data from two crystals, yielding a 99.1% complete data set with significant anomalous signal and resulting in a map with clearly interpretable secondary-structure features.

4. Conclusions and future work

In challenging (and increasingly more common) cases in which crystals are small, inhomogeneous and radiation-sensitive, complete data sets must be assembled from data from multiple crystals. A significant issue with multiple-crystal data is management and bookkeeping. BLEND provides a convenient and quick way of analysing multiple data sets and presenting an informative overview of the key characteristics of the merged clusters. BLEND manages multiple data sets in a way that allows users not to become overburdened with tedious bookkeeping and to focus effort on obtaining high-quality, complete and redundant data. It has been shown in several examples that, on average, clustering based on unit-cell parameters has a propensity to form data sets with the best merging statistics without the need to explore the very large space of data-set combinations. High values of R meas, or even the failure to scale certain combinations of data sets, in all of the test cases presented with the exception of ultralente insulin, cannot be ascribed exclusively to non-isomorphism but also arise from data incompleteness. For instance, both cryocooled thaumatin and cryocooled insulin have an R meas of 0.165 when merging all data sets together; this obviously means that the data either come from reasonably isomorphous crystals or from reasonably isomorphous parts of the same crystal. Still, certain clusters with a small number of data sets display values of R meas that are much higher than 0.165. This is even more the case for many random combinations of data sets.
The likely reason for this is the lack of sufficient overlap between partial data sets and, accordingly, the impossibility of determining sensible scaling parameters for their combination. Both good completeness and good merging statistics are important, and clustering in BLEND aims at exactly this balance: it does not, for example, sacrifice merging quality in favour of completeness. BLEND provides an option (-c) to include data sets not present in the dendrogram. As already explained, BLEND has the ability to automatically select groups with good merging statistics, thus reducing the enormous and mostly unachievable task of computing all possible data-set combinations. At the same time, BLEND has some built-in flexibility and allows users to decide which data sets should be combined and scaled according to the desired protocol. Ultimately, BLEND can only deal with the available data. It is the responsibility of other software or the user to ensure that all reciprocal-space regions are covered by the data. It would be advantageous to assess data quality on individual data sets prior to clustering. Suppose a data set presenting poor merging statistics joins a group with good statistics. For the new group the statistics will be worse because the new member will have ‘polluted’ the cluster. This insertion might potentially have prevented the old group from directly joining a different data set or group of data sets that would have yielded better statistics. It is therefore very important to be able to detect bad, or rogue, data sets prior to the clustering process; the resulting dendrogram would likely appear more homogeneous. At present, no outlier rejection of this type is carried out in BLEND. This is principally because BLEND was designed to operate on very partial data sets that individually might not yield sensible merging statistics.
Therefore, other quality indicators must be developed, perhaps measuring data dispersion around averages or similar statistical indicators of data concentration. In Figs. 4(b) and 7(b) a gap is observed in the sizes of the clusters. No groups of size 6, 7 or 8 exist for the insulin ultralente case, while no groups of size 7, 8, 9 or 10 exist for in situ lysozyme. The reason for this is the nature of hierarchical clustering. At every node new branches are formed in the dendrogram, with each branch carrying a greater number of objects than the branches from which it originated. This process accelerates rapidly at the top of the tree, where large branches emerge out of much smaller ones. Thus, it is unlikely that clusters of specific intermediate sizes will be observed when using hierarchical clustering. Other clustering techniques will have to be tried in order to create groups of any number of data sets. Non-hierarchical clustering techniques, such as k-means clustering (MacQueen, 1967), have the ability to form clusters of any size, from single objects up to the complete collection of data sets. Other non-hierarchical techniques, such as fuzzy clustering, can avoid the problem of forming clusters of well defined size by assigning weights that quantify the degree of association of an individual object with a given group. These clustering methods will be incorporated into BLEND in the future. One of the main reasons to collect small wedges of data from multiple crystals is to avoid long crystal exposure and, accordingly, to keep intensity bias arising from radiation damage to a minimum. However, it is still desirable during the analysis of multiple data sets to try and eliminate diffraction images that appear to have been severely affected by radiation. Procedures for the robust handling of radiation damage are currently being developed.
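The cluster-size gap noted above for Figs. 4(b) and 7(b) can be reproduced with a minimal agglomeration on made-up one-dimensional 'data sets'. BLEND itself uses Ward's method on standardized unit-cell parameters; single linkage is used here only to keep the sketch short, and all of the numbers are invented:

```python
def single_linkage_sizes(points):
    """Agglomerate 1-D points by single linkage; return the cluster sizes
    created at the successive dendrogram nodes."""
    clusters = [[p] for p in points]
    sizes = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest single-link distance
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(p - q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = clusters[i] + clusters[j]
        sizes.append(len(merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return sizes

# Two internally similar groups of five and three 'data sets', far apart.
sizes = single_linkage_sizes([0.0, 1.0, 3.0, 6.0, 10.0, 100.0, 101.5, 104.6])
# Nodes of sizes 2 to 5 appear inside the groups and the final node has
# size 8, but no node of size 6 or 7 ever forms: the same kind of gap as
# seen in the dendrograms discussed in the text.
```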
The reduction of bias caused by radiation damage is one of the main gains derived from using multiple data sets (Hanson et al., 2012) and in the extreme can be used to avoid any significant damage altogether, co-existing with other valid methods such as damage estimation, cryocooling and long exposure times in conjunction with fast readout methods (Holton & Frankel, 2010; Owen et al., 2012). An important by-product of the higher redundancy gained by collection from multiple data sets is the potential increase in anomalous signal and resulting phasing power, as demonstrated elsewhere by Liu et al. (2011) and here with the memPROT example of §3.5. Currently, no separation of data sets based on wavelength is included in the program. Accordingly, no provisions to cater for anomalous phasing preparation are built into the code. The user will have to treat data for the same crystal at different wavelengths as different data sets and carry out cluster analysis on all of them at the same time. Enhancement of the anomalous signal is to be expected for those groups of isomorphous crystals for which data were collected at the same wavelength. BLEND’s management of multi-wavelength and isomorphous replacement data will be improved to assist users in the analysis of such data. Cluster analysis, and the subsequent scaling and merging of data sets corresponding to its nodes, can be successful in grouping together isomorphous data and separating non-isomorphous data. This is, for instance, what emerges in the insulin ultralente case, where group A and group B corresponded to clear structural differences. On the other hand, in the example of in situ lysozyme the two main branches did not correspond to any major non-isomorphism but rather pointed to minor differences. This illustrates a level of subjectivity in drawing conclusions about whether the clusters actually reflect significant non-isomorphism or not.
This is clearly a user-dependent choice to be made based on the level of structural detail that the user is interested in studying. Ultimately, the most informative way of using BLEND is to generate multiple clusters, each corresponding to different levels of similarity or isomorphism, and analyse each one up to the evaluation of electron density. In this way, BLEND is a valuable addition to any automated pipeline dealing with multiple data sets from multiple crystals from the same project.

Supplementary Material

Supplementary material file. DOI: 10.1107/S0907444913012274/dz5278sup1.pdf
Author and article information

Journal: Acta Crystallographica Section D: Structural Biology (Acta Cryst. D)
Publisher: International Union of Crystallography
ISSN: 2059-7983
Published online: 24 April 2018; issue: 01 May 2018, Vol. 74, Part 5, pp. 441–449
Affiliation: RIKEN SPring-8 Center, Sayo 679-5148, Japan
Correspondence e-mail: yamamoto@riken.jp
Author ORCID: https://orcid.org/0000-0002-1311-1768
Article IDs: wa5117 ACSDAD S2059798318004576
DOI: 10.1107/S2059798318004576
PMCID: 5930351; PMID: 29717715
© Yamashita et al. 2018. This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
History: 06 January 2018; 19 March 2018
Page count: 9
Funding: This work was funded by Japan Society for the Promotion of Science grant 15H06862 and by a grant from the Japan Agency for Medical Research and Development.
Categories: Research Papers
Keywords: automatic data processing, microcrystals, KAMO, small-wedge data sets
