      Chloroflexus aurantiacus acetyl-CoA carboxylase evolves fused biotin carboxylase and biotin carboxyl carrier protein to complete carboxylation activity

          ABSTRACT

          Acetyl-CoA carboxylases (ACCs) convert acetyl-CoA to malonyl-CoA, a key step in fatty acid biosynthesis and autotrophic carbon fixation pathways. Three functionally distinct components, biotin carboxylase (BC), biotin carboxyl carrier protein (BCCP), and carboxyltransferase (CT), are either separate or partially fused in different combinations, forming heteromeric ACCs. However, an ACC with fused BC-BCCP and separate CT has not been identified, leaving its catalytic mechanism unclear. Here, we identify two BC isoforms (BC1 and BC2) from Chloroflexus aurantiacus, a filamentous anoxygenic phototroph that employs the 3-hydroxypropionate (3-HP) bi-cycle rather than the Calvin cycle for autotrophic carbon fixation. We show that BC1 possesses fused BC and BCCP domains, and that the BCCP domain can be biotinylated by E. coli or C. aurantiacus BirA at residue Lys553. Crystal structures of BC1 and BC2 at 3.2 Å and 3.0 Å resolution, respectively, further reveal a tetramer of two BC1-BC homodimers and a BC2 homodimer, all exhibiting similar BC architectures. The two BC1-BC homodimers are connected by an eight-stranded β-barrel of the partially resolved BCCP domain. Disruption of the β-barrel dissociates the tetramer into dimers in solution and decreases biotin carboxylase activity. Biotinylation of the BCCP domain further promotes BC1 and CTβ-CTα interactions to form an enzymatically active ACC, which converts acetyl-CoA to malonyl-CoA in vitro and produces 3-HP via co-expression with a recombinant malonyl-CoA reductase in E. coli cells. This study reveals a heteromeric ACC that evolves a fused BC-BCCP but separate CTα and CTβ to complete ACC activity.

          IMPORTANCE

          Acetyl-CoA carboxylase (ACC) catalyzes the rate-limiting step in fatty acid biosynthesis and autotrophic carbon fixation pathways across a wide range of organisms, making these enzymes attractive targets for drug discovery against various infections and diseases. Although structural studies on homomeric ACCs, which consist of a single protein harboring all three functional components, have revealed the “swing domain model,” in which the biotin carboxyl carrier protein (BCCP) domain translocates between the biotin carboxylase (BC) and carboxyltransferase (CT) active sites to facilitate the reaction, our understanding of the subunit composition and catalytic mechanism of heteromeric ACCs remains limited. Here, we identify a novel ACC from the ancient anoxygenic photosynthetic bacterium Chloroflexus aurantiacus that evolves fused BC and BCCP domains but separate CT components to form an enzymatically active ACC, which converts acetyl-CoA to malonyl-CoA in vitro and produces 3-hydroxypropionate (3-HP) via co-expression with a recombinant malonyl-CoA reductase in E. coli cells. These findings expand the known diversity and molecular evolution of heteromeric ACCs and provide a structural basis for potential applications in 3-HP biosynthesis.

          Most cited references

          PHENIX: a comprehensive Python-based system for macromolecular structure solution

          1. Foundations 1.1. PHENIX architecture The PHENIX (Adams et al., 2002 ▶) architecture is designed from the ground up as a hybrid system of tightly integrated interpreted (‘scripted’) and compiled software modules. A mix of scripted and compiled components is invariably found in all major successful crystallographic packages, but often the scripting is added as an afterthought in an ad hoc fashion using tools that predate the object-oriented programming era. While such ad hoc systems are quickly established, they tend to become a severe maintenance burden as they grow. In addition, users are often forced into many time-consuming routine tasks such as manually converting file formats. In PHENIX, the scripting layer is the heart of the system. With only a few exceptions, all major functionality is implemented as modules that are exclusively accessed via the scripting interfaces. The object-oriented Python scripting language (Lutz & Ascher, 1999 ▶) is used for this purpose. In about two decades, a large developer/user community has produced millions of lines of highly uniform, interoperable, mature and openly available sources covering all aspects of programming ranging from simple file handling to highly sophisticated network communication and fully featured cross-platform graphical interfaces. Embedding crystallographic methods into this environment enables an unprecedented degree of automation, stability and portability. By design, the object-oriented programming model fosters shared collaborative development by multiple groups. It is routine practice to hierarchically recombine modules written by different groups into ever more complex procedures that appear uniform from the outside. A more detailed overview of the key software technology leading to all these advances, presented in the context of crystallography, can be found in Grosse-Kunstleve et al. (2002 ▶). In addition to the advantages outlined in the previous paragraph, the scripting language is generally most efficient for the rapid development of new algorithms. However, run­time performance considerations often dictate that numerically intensive calculations are eventually implemented in a compiled language. The first choice of a compiled language is of course to reuse the same language environment as used for the scripting language itself, which is a C/C++ environment. Not only is this the mainstream software environment on all major platforms used today, but with probably hundreds of millions of lines of C/C++ sources in existence it is an environment that is virtually guaranteed to thrive in the long term. An in-depth discussion of the combined use of Python and C++ can be found in Grosse-Kunstleve et al. (2002 ▶) and Abrahams & Grosse-Kunstleve (2003 ▶). This model is used throughout the PHENIX system. 1.2. Graphical user interface A new graphical user interface (GUI) for PHENIX was introduced in version 1.4. It uses the open-source wxPython toolkit, which provides a ‘native’ look on each operating system. Development has focused on providing interfaces around the existing command-line programs with minimal modification, using the same underlying configuration system (libtbx.phil) as used by most PHENIX programs as a template to automatically generate controls. Because these programs are implemented primarily as Python modules, complex data including models, reflections and other viewable data may be exchanged with the GUI without resorting to parsing log files. 
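The configuration-driven approach to generating controls can be pictured with a few lines of Python. The sketch below is illustrative only: it mimics the idea of deriving GUI widgets automatically from declarative parameter definitions, and its class and field names are invented for this example rather than taken from libtbx.phil.

    # Toy illustration of configuration-driven control generation, in the spirit
    # of the libtbx.phil-based approach described above. The class and field
    # names are invented for this sketch and are not the PHENIX API.
    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class Param:
        name: str          # parameter name as used by the command-line program
        ptype: type        # Python type; decides which GUI widget to generate
        default: Any       # default value shown in the interface
        help: str = ""     # tooltip text

    @dataclass
    class ProgramDef:
        program: str
        params: List[Param] = field(default_factory=list)

        def make_controls(self):
            """Map each parameter to a (widget kind, label, default) triple."""
            widget_for = {bool: "checkbox", int: "spinbox", float: "spinbox", str: "text"}
            return [(widget_for[p.ptype], p.name, p.default) for p in self.params]

    refine = ProgramDef("phenix.refine", [
        Param("main.number_of_macro_cycles", int, 3, "refinement macro-cycles"),
        Param("main.simulated_annealing", bool, False, "run Cartesian simulated annealing"),
    ])
    print(refine.make_controls())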
The current PHENIX release (version 1.5) includes GUIs for phenix.refine (Afonine et al., 2005 ▶), phenix.xtriage (Zwart et al., 2005 ▶), the AutoSol (Terwilliger et al., 2009 ▶), AutoBuild (Terwilliger, Grosse-Kunstleve, Afonine, Moriarty, Adams et al., 2008 ▶) and LigandFit (Terwilliger et al., 2006 ▶) wizards, the restraints editor REEL, all of the validation tools and several utilities for creating and manipulating maps and reflection files. More recent builds of PHENIX contain a new GUI for the AutoMR wizard and future releases will include a new interface for Phaser (McCoy et al., 2007 ▶). Intrinsically graphical data is visualized with embedded graphs (using the free matplotlib Python library) or a simple OpenGL viewer. This simplifies the most complex parameters, such as atom selections in phenix.refine, which can be visual­ized or picked interactively with the built-in viewer. The GUI also serves as a platform for additional automation and user customization. Similarly to the CCP4 interface (CCP4i; Potterton et al., 2003 ▶), PHENIX manages data and task history for separate user-defined projects. Default parameters and input files can be specified for each project; for instance, the generation of ligand restraints from the phenix.refine GUI gives the user the option of automatically loading these restraints in future runs. The popularity of Python as a scientific programming language has led to its use in many other structural-biology applications, especially molecular-graphics software. The PHENIX GUI includes extension modules for the modeling programs Coot (Emsley & Cowtan, 2004 ▶) and PyMOL (DeLano, 2002 ▶), both of which are controlled remotely from PHENIX using the XML-RPC protocol. This allows the interfaces to integrate seamlessly; any model or map in PHENIX can be automatically opened in Coot with a single click. In programs that iteratively rebuild or refine structures, such as AutoBuild and phenix.refine, the current model and maps will be continually updated in Coot and/or PyMOL as soon as they are available. In the validation utilities, clicking on any atom or residue flagged for poor statistics will recentre the graphics windows on that atom. Remote control of the PHENIX GUI is also simple using the same protocol and simple extensions to the Coot interface provide direct launching of phenix.refine with a model pre-loaded. 2. Analysis of experimental data PHENIX has a range of tools for the analysis, validation and manipulation of X-ray diffraction data. A comprehensive tool for analyzing X-ray diffraction data is phenix.xtriage (Zwart et al., 2005 ▶), which carries out tests ranging from space-group determination and detection of twinning to detection of anomalous signal. These tests provide the user and the various wizards with a set of statistics that characterize a data set. For analysis of twinning, phenix.xtriage consolidates a number of statistics to provide a balanced verdict of possible symmetry and twin-related issues with the data. Phenix.xtriage provides the user with feedback on the overall characteristics of the data. Routine usage of phenix.xtriage during or immediately after data collection has resulted in the timely discovery of twinning or other issues (Flynn et al., 2007 ▶; Kostelecky et al., 2009 ▶). Detection of these idiosyncrasies in the data typically reduces the overall effort in a successful structure determination. 
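As a rough illustration of one of the intensity statistics consolidated during twinning analysis, the following Python/NumPy sketch estimates 〈|E² − 1|〉 from acentric intensities normalized in resolution shells; the usual reference values are roughly 0.736 for untwinned and 0.541 for perfectly twinned acentric data. This is a toy stand-in, not the phenix.xtriage implementation.

    # Toy sketch of the <|E^2 - 1|> twinning statistic: intensities are
    # normalized within resolution shells so that <E^2> = 1, then the mean
    # absolute deviation from 1 is returned. Not the phenix.xtriage code.
    import numpy as np

    def mean_abs_e2_minus_1(intensities, resolution, n_shells=20):
        intensities = np.asarray(intensities, dtype=float)
        resolution = np.asarray(resolution, dtype=float)
        s2 = 1.0 / resolution**2                         # bin by 1/d^2
        edges = np.quantile(s2, np.linspace(0.0, 1.0, n_shells + 1))
        shell = np.clip(np.searchsorted(edges, s2, side="right") - 1, 0, n_shells - 1)
        e2 = np.empty_like(intensities)
        for k in range(n_shells):
            sel = shell == k
            e2[sel] = intensities[sel] / intensities[sel].mean()
        return float(np.abs(e2 - 1.0).mean())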
A likelihood-based estimation of the overall anisotropic scale factor is performed using the likelihood formalism described by Popov & Bourenkov (2003 ▶). Database-derived standard Wilson plots for proteins and nucleic acids are used to detect anomalies in the mean intensity. These anomalies may arise from ice rings or other issues (Morris et al., 2004 ▶). Data strength and low-resolution completeness are also analysed. The presence of anomalous signal is detected by analysis of the measurability, a quantity expressing the fraction of statistically significant Bijvoet differences in a data set (Zwart, 2005 ▶). The native Patterson function is used to detect the presence of pseudo-translational symmetry. A database-derived empirical distribution of maximum peak heights is used to assign significance to detected peaks in the Patterson function. A comprehensive automated twinning analysis is performed. Twin laws are derived from first principles to facilitate the identification of pseudo-merohedral cases. Amplitude and intensity ratios, 〈|E² − 1|〉 values, the L-statistic (Padilla & Yeates, 2003 ▶) and N(Z) plots are derived from data cut to the resolution limit suggested by the data-strength analysis. The removal of shells of data with relatively high noise content greatly improves the automated interpretation of these statistics. A Britton plot, H-test and a likelihood-derived approach are used to estimate twin fractions when twin laws are present. If a model has been supplied, an R versus R (Lebedev et al., 2006 ▶) analysis is carried out. This type of analysis is of particular use when dealing with pseudo-symmetry, space-group problems and twinning (Zwart et al., 2008 ▶). To test for inconsistent indexing between different data sets, a set of reindexing laws is derived from first principles given the unit cells and space groups of the sample and reference data sets. A correlation analysis suggests the most likely choice of reindexing of the data. Analysis of the metric symmetry of the unit cell provides a number of likely point groups. A likelihood-inspired method is used to suggest the most likely point group of the data. Subsequent analysis of systematic absences in a likelihood framework ranks subsequent space-group possibilities (details to be published). 3. Substructure determination, phasing and molecular replacement After ensuring that the diffraction data are sound and understood, the next critical necessity for solving a structure is the determination of phases using one of several strategies (Adams, Afonine et al., 2009 ▶). 3.1. Substructure determination The substructure-determination procedure implemented as phenix.hyss (Hybrid Substructure Search; Grosse-Kunstleve & Adams, 2003 ▶) combines the multi-trial dual-space recycling approaches pioneered by Shake-and-Bake (Miller et al., 1994 ▶) and later SHELXD (Sheldrick, 2008 ▶) with the use of the fast translation function (Navaza & Vernoslova, 1995 ▶; Grosse-Kunstleve & Brunger, 1999 ▶). The fast translation function is the basis for a systematic search in the Patterson function (performed in reciprocal space), in contrast to the stochastic alternative of SHELXD (performed in direct space). Phenix.hyss is the only substructure-determination program to fully integrate automatic comparison of the substructures found in multiple trials via a Euclidean Model Matching procedure (part of the cctbx open-source libraries).
This allows phenix.hyss to detect if the same solution was found multiple times and to terminate automatically if this is the case. Extensive tests with a variety of SAD data sets (Grosse-Kunstleve & Adams, 2003 ▶) have led to a parameterization of the procedure that balances runtime considerations and the likelihood that repeated solutions present the correct substructure. In many cases the procedure finishes in seconds if the substructure is detectable from the input data. 3.2. Phasing Phaser, available in PHENIX as phenix.phaser, applies the principle of maximum likelihood to solving crystal structures by molecular replacement, by single-wavelength anomalous diffraction (SAD) or by a combination of both. The likelihood targets take proper account of the effects of different sources of error (and, in the case of SAD phasing, their correlations) and allow different sources of information to be combined. In solving a molecular-replacement problem with a number of different components, the information gained from a partial solution increases the signal in the search for subsequent components. Because the likelihood scores for different models can be directly compared, decisions among models can readily be made as part of automation strategies (discussed below). 3.3. Noncrystallographic symmetry (NCS) Noncrystallographic symmetry is an important feature of many macromolecular crystals that can be used to greatly improve electron-density maps. PHENIX has tools for the identification of NCS and for using NCS and multiple crystal forms of a macromolecule in phase improvement. Phenix.find_ncs and phenix.simple_ncs_from_pdb are tools for the identification of noncrystallographic symmetry in a structure using information from a heavy-atom substructure or an atomic model. Phenix.simple_ncs_from_pdb will identify NCS and generate transformations from the chains in a model in a PDB file. Phenix.find_ncs will identify NCS from either a heavy-atom substructure (Terwilliger, 2002a ▶) or the chains in a PDB file and will then compare this NCS with the density in a map to verify that the NCS is actually present. Phenix.multi_crystal_average is a method for combining information from several crystal forms of a structure. It is especially well suited to cases where each crystal form has its own NCS, adjusting phases for each crystal form so that all the NCS copies in all crystals are as similar as possible. NCS restraints should normally be applied in density modification and model building in all cases except where there is clear evidence that NCS is not present. In density modification within PHENIX the presence of NCS is identified from the heavy-atom sites or from an atomic model if available. The local correlation of density in NCS-related locations is then used automatically to set variable restraints on NCS symmetry in the map. In refinement, NCS symmetry is applied through coordinate restraints, targeting the positions of each NCS copy relative to those of the other NCS-related chains. The default NCS restraints in PHENIX are very tight, with targets of 0.05 Å r.m.s. At resolutions lower than about 2.5 Å these tight restraints on NCS should usually be applied. At higher resolutions it may be appropriate to use looser restraints or to remove them altogether. Additionally, if there are segments of the chains that clearly do not obey the NCS relationships they should be excluded from the NCS restraints. Normally this is performed automatically, but it can also be specified explicitly. 4. 
Model building, ligand fitting and nucleic acids Key steps in the analysis of a macromolecular crystal structure are building an initial core model, identification and fitting of ligands into the electron-density map and building an atomic model for loop regions that are less well defined than the majority of the structure. PHENIX has tools for rapid model building of secondary structure and main-chain tracing (phenix.find_helices_strands) and for the fitting of flexible ligands (phenix.ligandfit) as well as for fitting a set of ligands to a map (phenix.find_all_ligands) and for the identification of ligands in a map (phenix.ligand_identification). PHENIX additionally has a tool for the fitting of missing loops (phenix.fit_loops). Validation tools are provided so that the models produced can be validated at each step along the way. 4.1. Model building Phenix.find_helices_strands will rapidly build a secondary-structure-only model into a map or very rapidly trace the polypeptide backbone of a model into a map. To build secondary structure in a map, phenix.find_helices_strands identifies α-helical regions and β-strand segments, models idealized helices and strands into the corresponding density, allowing for bending of the helices and strands, and assembles these into a composite model. To very rapidly trace the main chain in a map, phenix.find_helices_strands finds points along ridgelines of high density where Cα atoms might be located, identifies pairs and then triplets of these Cα atoms that have density between the atoms and plausible geometry, constructs all possible connections of these Cα atoms into nonamers and then identifies all the longest possible chains that can be made by joining the nonamers. This process can build a Cα model at a rate of about 20 residues per second, yielding a backbone model that can readily be interpreted visually or automatically to evaluate the quality of the map that it is based on. Phenix.fit_loops will fit missing loops in an atomic model. It uses RESOLVE model building (Terwilliger, 2003a ▶,b ▶,c ▶) to extend the chain from either end where a loop is missing and to connect the chains into a loop with the expected number of residues. 4.2. Ligand fitting Phenix.ligandfit is a tool for fitting a flexible ligand into an electron-density map (Terwilliger et al., 2006 ▶). The key approaches used are breaking the ligand into its component rigid-body parts, finding where each of these can be placed into density, tracing the remainder of the ligand based on the positions of these core rigid-body parts and recombining the best parts of multiple fits while scoring based on the fit to the density. Phenix.find_all_ligands is a tool for finding all the instances of each of several ligands in an electron-density map. Phenix.find_all_ligands finds the largest contiguous region of unused density in a map and uses phenix.ligandfit to fit each supplied ligand into that density. It then chooses the ligand that has the highest real-space correlation to the density (Terwilliger, Adams et al., 2007 ▶). It then repeats this process until no ligands can be satisfactorily fitted into any remaining density in the map. Phenix.ligand_identification is a tool for identifying which ligands are compatible with unknown electron density in a map (Terwilliger, Adams et al., 2007 ▶). It can search using the 200 most common ligands from the PDB or from a user-supplied list of ligands. 
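The real-space correlation used above to rank ligand placements can be sketched as a plain Pearson correlation between observed and model-calculated density over the grid points covered by the candidate ligand; the map arrays and mask in this illustrative snippet are assumed to be prepared elsewhere.

    # Rough sketch of real-space correlation scoring for ligand fits: the
    # Pearson correlation between observed and calculated density over the
    # grid points of a candidate ligand. Inputs are assumed precomputed.
    import numpy as np

    def real_space_cc(obs_map, calc_map, mask):
        """Pearson correlation of two density maps over the masked region."""
        x = np.asarray(obs_map, dtype=float)[mask]
        y = np.asarray(calc_map, dtype=float)[mask]
        x -= x.mean()
        y -= y.mean()
        denom = np.sqrt((x * x).sum() * (y * y).sum())
        return float((x * y).sum() / denom) if denom > 0 else 0.0

    def best_ligand(obs_map, candidate_maps, masks):
        """Return the index of the candidate with the highest correlation."""
        scores = [real_space_cc(obs_map, m, k) for m, k in zip(candidate_maps, masks)]
        return int(np.argmax(scores))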
Phenix.ligand_identification uses phenix.ligandfit to fit each ligand to the map and identifies the best-fitting ligand using the real-space correlation and surface complementarity of the ligand and the atoms in the structure surrounding the ligand-binding site. 4.3. RNA and DNA In common with most macromolecular crystallographic tools, PHENIX was originally developed with protein structures primarily in mind. Now that nucleic acids, and especially RNA, are increasingly important in large biological structures, the system is being modified in places where subtle differences in procedure are needed rather than just the relevant libraries. Model building in phenix.autobuild now has a preliminary set of nucleic acid procedures that take advantage of the relatively well determined phosphate and base positions, as well as the preponderance of double helix, and that make use of the RNA backbone conformers recently defined by the RNA Ontology Consortium (Richardson et al., 2008 ▶). Nucleic acid structures benefit significantly from torsion-angle refinement, which has recently been added to the options in phenix.refine. A principal problem in RNA models is getting the ribose pucker correct, although it is known to consist almost entirely of either C3′-endo (which is commoner and that found in the A-form helix) or C2′-endo (Altona & Sundaralingam, 1972 ▶). MolProbity uses the perpendicular distance from the 3′ phosphate to the line of the C1′—N1/9 glycosidic bond as a reliable diagnostic of ribose pucker (Davis et al., 2007 ▶; Chen et al., 2010 ▶). This same test has now been built into phenix.refine to allow the use of pucker-specific target parameters for bond lengths, angles and torsions (Gelbin et al., 1996 ▶) rather than the uneasy compromise values (Parkinson et al., 1996 ▶) used in most pucker-agnostic refinement. Currently, if an incorrect pucker is diagnosed it must usually be fixed by user rebuilding, for instance in Coot (Emsley & Cowtan, 2004 ▶) or in RNABC (Wang et al., 2008 ▶). A rebuilding functionality will probably be incorporated into PHENIX soon, but in the meantime the refinement will now correctly maintain the geometry of a C2′-endo pucker once it has been built and identified using conformation-specific residue names. 4.4. Maps, models and avoiding bias Phenix.refine (and the graphical tool phenix.create_maps) can produce various types of maps, including anomalous difference, maximum-likelihood weighted (p·mF_obs − q·DF_model)exp(iα_model) and regular (p·F_obs − q·F_model)exp(iα_model), where p and q are any user-defined numbers, filled and kick maps. The coefficients m and D of likelihood-weighted maps (Read, 1986 ▶) are computed using test-set reflections as described in Lunin & Skovoroda (1995 ▶) and Urzhumtsev et al. (1996 ▶). Data incompleteness, especially systematic incompleteness, can cause map distortions (Lunin, 1988 ▶; Tronrud, 1997 ▶). An approach to remedying this problem is to replace (‘fill’) missing observations with nonzero values. One can use DF_model (similarly to REFMAC; Murshudov et al., 1997 ▶) to replace the missing F_obs or use 〈F_obs〉, where the F_obs are averaged across a resolution bin around the missing F_obs value. Based on a limited number of tests, both ‘filling’ schemes produce similar results, reiterating the importance of phases. However, it is important to keep in mind that by replacing missing F_obs there is a risk of introducing bias and obviously the more incomplete the data is the larger the risk.
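As a rough illustration of this map-coefficient bookkeeping, the sketch below assembles (p·mF_obs − q·DF_model)exp(iα_model) coefficients with optional 'filling' of missing observations; the per-reflection m, D, F_model and model phases are assumed to have been computed elsewhere, and the function is not the PHENIX implementation.

    # Sketch of map-coefficient assembly with user-defined p and q and
    # optional 'filling' of missing F_obs with D*F_model, as described above.
    import numpy as np

    def map_coefficients(f_obs, m, d, f_model, phase_model, p=2.0, q=1.0, fill_missing=False):
        """Return complex map coefficients; f_obs may contain NaN for missing data."""
        f_obs = np.array(f_obs, dtype=float)
        m = np.asarray(m, dtype=float)
        d = np.asarray(d, dtype=float)
        f_model = np.asarray(f_model, dtype=float)
        phase_model = np.asarray(phase_model, dtype=float)   # radians
        missing = np.isnan(f_obs)
        if fill_missing:
            f_obs[missing] = (d * f_model)[missing]   # one of the filling schemes above
        amplitude = p * m * f_obs - q * d * f_model
        coeffs = amplitude * np.exp(1j * phase_model)
        if not fill_missing:
            coeffs[missing] = 0.0                      # leave gaps in the unfilled map
        return coeffs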
At present it is advisable to use both maps simultaneously: filled and not filled. An average kick map (AK map; Gunčar et al., 2000 ▶; Turk, 2007 ▶; Pražnikar et al., 2009 ▶) is the result of the following procedure. A large ensemble of structures is created where the coordinates of each structure from the ensemble are all randomly shaken. A map is then computed for each structure. Finally, all maps are averaged to generate one AK map. An AK map is expected to have less bias and less noise and to enhance the existing signal and can potentially clarify some initially bad densities. A computationally intensive but powerful method of creating a very low-bias map is to carry out iterative model building and refinement while omitting one region of the map from all calculations of structure factors (Terwilliger, Grosse-Kunstleve, Afonine, Moriarty, Adams et al., 2008 ▶). The phenix.autobuild iterative-build OMIT map procedure carries this out automatically for either a single OMIT region or for overlapping OMIT regions to create a composite iterative-build OMIT map. 5. Model, and model-to-data, validation The result of crystallographic structure determination is the atomic model. There are three principal components in assessing model quality: the covalent model geometry, the model stereochemistry and the quality of fit between the model and experimental data in both real space and in reciprocal space. All three provide overall measures, and the first two plus the real-space aspect of the third also provide checks for local outliers, which give the best leverage for user intervention to actively improve model accuracy (Arendall et al., 2005 ▶). (Validation of the experimental data was described in §2 above.) PHENIX includes many individual tools for specific aspects of validation, plus several systems that combine those results into overall summaries. Validation is provided both for user evaluation of the progress and results of a structure solution and also to help inform the automated choices made by other parts of the system. Most aspects of the MolProbity model-validation tools (Davis et al., 2007 ▶; Chen et al., 2010 ▶) have been adapted or rewritten for integrated use within PHENIX and are pre­sented to the user by the new GUI (§1.2). H atoms are added by phenix.reduce, with optimization of entire local hydrogen-bond networks, consideration of the first layer of crystallo­graphic waters and optional correction of side-chain amide or histidine 180° ‘flips’ (Word, Lovell, Richardson et al., 1999 ▶). All-atom contacts (Word, Lovell, LaBean et al., 1999 ▶) are calculated by phenix.probe, which provides the atomic overlap information needed for the validation of serious all-atom steric clashes and can also be visualized in Coot. For the PHENIX GUI, the set of MolProbity-based tools provides both overall model statistics, such as clashscore and percentage of outliers, and detailed lists of the Ramachandran (Lovell et al., 2003 ▶), rotamer (Lovell et al., 2000 ▶), Cβ deviation (Lovell et al., 2003 ▶) and clash outliers. Command-line tools are available for these validation methods: phenix.rotalyze, phenix.ramalyze, phenix.cbetadev, phenix.clashscore, phenix.reduce and phenix.probe. Additionally, phenix.validate_model, which analyzes the deviations of bond lengths, bond angles, planarity etc. from ideal library values, complements the MolProbity torsional and atomic clash tools. 
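The covalent-geometry part of this validation can be pictured as a comparison of model bond lengths with library targets, reported as an r.m.s. deviation, an r.m.s. Z-score and a list of outliers; the target values and sigmas in the snippet below are assumed inputs from a restraint library, and the function is only a sketch of the idea.

    # Minimal sketch of a covalent-geometry check: deviations of model bond
    # lengths from ideal library values, as r.m.s.d., r.m.s. Z-score and a
    # list of outliers. Targets and sigmas come from a geometry library.
    import numpy as np

    def bond_length_stats(observed, ideal, sigma, outlier_cutoff=4.0):
        """Return (rmsd, rmsz, outlier indices) for a set of bonds."""
        observed = np.asarray(observed, dtype=float)
        ideal = np.asarray(ideal, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        delta = observed - ideal
        z = delta / sigma
        rmsd = float(np.sqrt(np.mean(delta**2)))
        rmsz = float(np.sqrt(np.mean(z**2)))
        outliers = np.flatnonzero(np.abs(z) > outlier_cutoff)
        return rmsd, rmsz, outliers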
Phenix.real_space_correlation asserts the local model-to-data correspondence by providing a quantitative measure of how the atomic model fits the electron-density map at the residue or atom level (depending on the resolution). Rapidly obtaining a snapshot of global figures of merit for a crystallo­graphic model and associated experimental data is a frequent task that is performed at all stages of structure solution. This task can be complicated for several reasons: the presence of novel ligands or nonstandard residues in the PDB-format (Berman et al., 2000 ▶) coordinate file, data collected from twinned crystals, various reflection datafile formats, different representation of atomic displacement parameters in the presence of TLS (Schomaker & Trueblood, 1968 ▶), experimental data type (X-­ray and/or neutron), files with multiple models and various formatting issues. Phenix.model_vs_data is designed to automatically handle all these complications with minimal user input (a PDB file and a reflection data file) and provide a concise summary output. Phenix.polygon (Urzhumtseva et al., 2009 ▶) is a graphical tool that is designed to indicate the similarity of validation parameters, such as free R value, for a particular structure compared with those deposited in the PDB. This comparison is performed for all other structures solved at similar resolution limits. The result is presented graphically. Phenix.validation combines all of the tools described above in one GUI, providing a single place for assessing the results of structure determination. 5.1. Model and structure-factor manipulation and analysis PHENIX has a range of tools for displaying, analyzing and manipulating structure-factor and model information. Phenix.mtz.dump and phenix.cif_as_mtz display and convert structure-factor data. Phenix.print_sequence, phenix.pdb_atom_selection and phenix.pdbtools display and manipulate coordinate files. Phenix.tls is a tool for the extraction and manipulation of TLS information. Using this tool, TLS matrices and selections can be extracted from REFMAC- or PHENIX-formatted PDB file headers and the total or residual atomic B factors can be computed and output. Future functionality will include the complete analysis of TLS matrices and their graphical visual­ization. Phenix.get_cc_mtz_mtz and phenix.get_cc_mtz_pdb are tools for analyzing the agreement between maps based on a pair of MTZ files or between maps calculated from an MTZ file and a PDB file. The key attributes of these tools are that they automatically search all allowed origin shifts that might relate the two maps and that they write out a modified version of one of the MTZ files or of the PDB file, shifted to match the other. 6. Structure refinement Phenix.refine is the state-of-the-art crystallographic structure-refinement engine of PHENIX. The foundational refinement machinery is a combination of highly efficient programming tools and new or rethought crystallographic algorithms. Phenix.refine possesses an extensive set of tools that cover the majority of refinement scenarios at any data resolution from low to ultrahigh. Various reflection-data formats (for example, CNS, MTZ and SHELX) are recognized automatically. The input experimental data are checked for outliers (Read, 1999 ▶; Zwart et al., 2005 ▶) and any reflections identified as such are excluded from the refinement calculations. Twinning can also be taken into account by providing a twin-law operator, which can be obtained using phenix.xtriage. 
Both X-ray and/or neutron diffraction data can be used and an option for joint XN refinement is available (simultaneous refinement against X-­ray and neutron data; Adams, Mustyakimov et al., 2009 ▶). Each refinement run begins with robust mask-based bulk-solvent correction and anisotropic scaling (Afonine et al., 2005 ▶). Tools such as efficient rigid-body refinement (multiple-zones algorithm; Afonine et al., 2009 ▶), simulated-annealing refinement (Brünger et al., 1987 ▶) in Cartesian or torsion-angle space (Grosse-Kunstleve et al., 2009 ▶), automatic NCS detection and its use as restraints in refinement are important at low resolution and in the initial stages of refinement. A broad range of atomic displacement parameterizations are available, including grouped isotropic, constrained anisotropic (TLS) and individual atomic isotropic or anisotropic, allowing efficient modelling of atomic displacement parameters at any resolution. Occupancy refinement (grouped, individual, group constrained for alternative conformations or any mixture) can be performed for any user-defined atoms. Atoms in alternative conformations are recognized automatically based on altLoc identifiers in the input PDB file and their occupancies are refined by default. Ordered solvent (water) model updating is integrated into the refinement process. The availability of ultrahigh-resolution data makes it possible to visualize the residual density arising from bonding effects; phenix.refine employs a novel interatomic scatterers model (Afonine et al., 2007 ▶) to adequately account for these features. A flexible parameterization of H atoms allows their use at any resolution from subatomic (where their parameters can be refined individually) to low resolution (where a riding model is used). Refinement can be performed using a variety of refinement target functions, including maximum likelihood, maximum likelihood with experimental phase information and amplitude least squares. The refinement of coordinates can be performed in real or reciprocal space (allowing dual-space refinement). Novel ligands can easily be included in refinement by providing a corresponding CIF file as input (the CIF file can be automatically created using phenix.ready_set). Manual fixing of amino-acid side-chain rotamers can be time-consuming, especially for large structures. Although the use of simulated-annealing refinement increases the convergence radius, it can still fail to fit incorrectly modelled side chains into the correct density. Phenix.refine has an option for automatic selection of the best rotamer based on a rotamer library (Lovell et al., 2000 ▶) and optimal fit into the density (details to be published elsewhere). Furthermore, coupling real-space refinement with the built-in rotamer library and available MolProbity tools allows the automated identification and robust correction of common systematic errors involving backward-fit conformations for Leu, Thr, Val, Ile and Arg side chains, as developed and tested in the Autofix method (Headd et al., 2009 ▶). Phenix.refine allows multi-step complex refinement protocols in which most of the available refinement strategies can be combined with each other and applied to any selected part of the model. For example, a run of phenix.refine may perform rigid-body refinement, simulated annealing, individual and grouped B factors combined with TLS refinement, constrained occupancy refinement and automatic water picking. 
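One way to picture the 'group constrained' occupancy refinement of alternative conformations is to refine unconstrained variables and map them onto occupancies that are positive and sum to one; the softmax parameterization below is only an illustration of that constraint, not the scheme used by phenix.refine.

    # Sketch of keeping the occupancies of alternative conformations
    # constrained to sum to 1: refine free parameters and map them through
    # a softmax. Illustration only, not the PHENIX parameterization.
    import numpy as np

    def occupancies_from_free_params(x):
        """Map free parameters to occupancies that are positive and sum to 1."""
        x = np.asarray(x, dtype=float)
        e = np.exp(x - x.max())          # subtract max for numerical stability
        return e / e.sum()

    # Two alternative conformations A and B of a side chain:
    occ = occupancies_from_free_params([0.3, -0.1])
    print(occ, occ.sum())  # e.g. [0.60 0.40] 1.0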
The output of phenix.refine includes various maps (maximum-likelihood weighted, kicked, incompleteness corrected, anomalous difference and those with any user-defined coefficients), complete model and data statistics and a PDB file with a formatted REMARK 3 header ready for PDB deposition. The phenix.refine GUI is integrated with Coot and PyMOL, allowing seamless visual analysis of the refined model and associated maps. Phenix.refine is tightly integrated with other PHENIX components, making structure solution, building and refinement a one-step process (for example, in the AutoMR and AutoBuild wizards). It is routinely tested by automatic re-refinement of all models in the PDB for which the experimental data are available. 6.1. Ligand-coordinate and restraint-geometry generation The electronic Ligand Builder and Optimization Builder (eLBOW; Moriarty et al., 2009 ▶) is a suite of tools designed for the reliable generation of Cartesian coordinates and geometry restraints for both novel and known ligands. In line with the rest of the PHENIX package, the eLBOW modules are written in Python, with the numerically intensive portions of the code written in C++. eLBOW is a flexible platform for converting a majority of common chemical inputs to optimized three-dimensional coordinates and geometry restraints for refinement. Ligand geometries can be minimized using the semi-empirical AM1 quantum-chemical method (Stewart, 2004 ▶), a numerically efficient and chemically accurate technique for the class of molecules commonly complexed with or bound to proteins. In addition, a graphical user interface for editing geometry restraints and simple geometry manipulation of ligands has been developed. The Restraints Editor, Especially Ligands (REEL) removes the tedium of manually editing a restraints file by providing a number of commonly performed actions via pull-down menus and other interactive features. The effect of changes in the restraints can be immediately reflected in the molecule view to provide user feedback. A tool that uses many of the features of eLBOW to quickly and easily prepare a protein model for refinement is known as ReadySet! The flexibility of the Python interface is exemplified by the use of Reduce, eLBOW and several smaller portions of the cctbx toolkit to add H and/or D atoms to the model, ligands and water and to generate metal-coordination files and geometry restraints for unknown ligands. The files required for covalently bound ligands are also generated. 7. Integrated structure determination 7.1. Why automation? Automation has dramatically changed macromolecular crystallography over the past decade, both by greatly speeding up the process of structure solution, model building and refinement and by bringing the tools for structure determination to a much wider group of scientists. As automation becomes increasingly comprehensive, it will allow users to test many more possibilities for structure determination, will allow improved estimation of uncertainties in the final structures and will allow the determination of ever more complex and difficult structures. The PHENIX environment has been developed with automation as a key and defining feature. Each tool within PHENIX can seamlessly and nearly effortlessly be incorporated as part of any other tool or process in PHENIX. This means that very complex tasks can be built up from well tested and characterized tools and that tools and higher-level methods can be re-used in many different contexts.
With a full automatic regression testing system as an integral part of the PHENIX environment, all these tasks and high-level methods are tested daily to ensure the integrity of the entire PHENIX system. 7.2. Automated structure solution PHENIX has fully integrated structure-solution capability for both experimental phasing (MAD, SAD, MIR and com­binations of these), carried out by phenix.autosol, and for molecular replacement, performed by phenix.automr. Each of these automated procedures feeds directly into the iterative model building, density modification and refinement of phenix.autobuild. Phenix.autosol is designed to allow complete automation of experimental phasing while allowing a high degree of flexibility for advanced users. Beginning with structure-factor amplitudes and the sequence of the macromolecule, phenix.autosol uses phenix.solve (Terwilliger & Berendzen, 1999 ▶) to scale all data sets, phenix.xtriage (Zwart et al., 2005 ▶) to analyze the data for twinning and to correct any anisotropy in the data and phenix.hyss (Grosse-Kunstleve & Adams, 2003 ▶) to find potential heavy-atom or anomalously scattering atoms. Phenix.autosol carries out experimental phasing with phenix.phaser (McCoy et al., 2004 ▶, 2007 ▶) or phenix.solve (Terwilliger & Berendzen, 1999 ▶), density modification with phenix.resolve (Terwilliger, 1999 ▶) and preliminary model building using the methods in phenix.autobuild (Terwilliger, Grosse-Kunstleve, Afonine, Moriarty, Zwart et al., 2008 ▶). A key step in automated structure solution is the identification of which of several possible space-group and heavy-atom or anomalously scattering-atom substructures is correct. Phenix.autosol uses a Bayesian scoring algorithm based on analysis of the experimental electron-density maps to identify which substructures lead to the best maps (Terwilliger et al., 2009 ▶). The main features of the maps that are used in this evaluation are the skewness of the electron density (non-Gaussian histogram of density with more density in the positive tail than the negative tail) and the correlation of local r.m.s. density (large contiguous regions of high variation where the molecule is located and separate large contiguous regions of low variation where the solvent is located). Phenix.autosol is highly flexible, allowing any combination of experimental data, such as MAD + SIRAS or several SAD data sets. Although it is fully automated, the user can control nearly all aspects of the operation of the procedure, including the scoring criteria and decisions about how certain phenix.autosol should be that the correct solution is contained in the current lists of solutions. Phenix.autosol can carry out phasing using a combination of experimental SAD data and molecular-replacement information. If a molecular-replacement model is available, phenix.autosol will use phenix.phaser (McCoy et al., 2004 ▶, 2007 ▶) to complete the anomalous substructure iteratively by con­structing log-likelihood gradient maps for the anomalous scatterers based on the model of the non-anomalous structure and any anomalous scatterers that have already been found. The anomalous substructure is then used along with the model to calculate phases with phenix.phaser. Phenix.automr carries out automated likelihood-based molecular replacement using phenix.phaser (Read, 2001 ▶; McCoy et al., 2005 ▶, 2007 ▶; McCoy, 2007 ▶). 
The procedure is highly automated, allowing several copies of each of several components to be placed in a single run, which can also test different possible choices of space group. If there are alternative choices of model for a component, the molecular-replacement calculation can try each of them in turn or combine them as a statistically weighted ensemble. Although the evaluation of the likelihood targets is slow (Read, 2001 ▶), the use of fast approximations for the rotation search (Storoni et al., 2004 ▶) and the translation search (McCoy et al., 2005 ▶) gives run times that are competitive with traditional Patterson-based methods. Likelihood has been demonstrated to be more sensitive to the correct solution, particularly in difficult cases (Read, 2001 ▶). When there are several copies or several components to place, the ability of the likelihood functions to take advantage of preliminary partial solutions can provide a crucial increase in the signal. 7.3. Iterative model building, density modification and refinement Phenix.autobuild is a highly integrated and automated procedure for model building and model improvement through iterative model building, density modification and refinement. Phenix.autobuild uses phenix.resolve (Terwilliger, 2003a ▶,b ▶) to carry out model building, model extension, model assembly, loop fitting and building outside existing models. It further uses phenix.resolve to improve electron-density maps with statistical density modification, including information from the newly built models as well as that obtained from experiment (e.g. phenix.autosol), from NCS (Terwilliger, 2002b ▶) and from other expected features of electron-density maps such as a flat solvent (Wang, 1985 ▶), the presence of secondary-structural features (Terwilliger, 2001 ▶) and the presence of local patterns of density characteristic of macromolecules (Terwilliger, 2003c ▶). To reduce model bias in the procedure, prime-and-switch phasing can also be used (Terwilliger, 2004 ▶). Phenix.autobuild uses phenix.refine (Afonine et al., 2005 ▶) throughout this process to improve the quality of the models that are built. Phenix.autobuild provides two complementary approaches to model building. For cases in which no model or only a preliminary model has been built, phenix.autobuild will con­struct a new model considering the main chain of any supplied models as potential coordinates. In cases where a nearly final model is available, phenix.autobuild can apply a rebuild-in-place approach in which the polypeptide chain is rebuilt a few residues at a time without changing the register or the overall features of the model. The rebuild-in-place approach in phenix.autobuild provides a powerful method for the assessment of uncertainties in an atomic model by repetitive rebuilding of the model using different random seeds for each iteration (Terwilliger, Grosse-Kunstleve et al., 2007 ▶). The variability in the coordinates of each atom in the ensemble that is created is a lower bound on the uncertainty of the position of that atom. 8. Conclusions Advances in computational methods and algorithms have made it possible to automate the solution of many structures with PHENIX. However, many challenges still exist. In particular, the development of automated methods that can be applied at low resolution (worse than 3.0 Å) remains a priority. In this resolution range there are typically too few experimental data to uniquely define the macromolecular structure for automated ab initio model building. 
Thus, methods are required that rely on prior knowledge from existing macromolecular structures to permit productive automated data interpretation. These methods will need to be developed and applied for all stages of structure solution and tightly integrated to maximize the information extracted from the experimental data.
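The ensemble-based uncertainty estimate described in §7.3 amounts to measuring the per-atom coordinate spread across repeatedly rebuilt models; a minimal NumPy sketch, assuming the models are already superposed and atom-matched, is given below.

    # Sketch of the rebuild-in-place uncertainty estimate: the spread of each
    # atom's position across independently rebuilt models is a lower bound on
    # its positional uncertainty. Models are assumed superposed and matched.
    import numpy as np

    def per_atom_spread(ensemble):
        """ensemble: (n_models, n_atoms, 3) array; returns per-atom r.m.s. spread."""
        ensemble = np.asarray(ensemble, dtype=float)
        mean_coords = ensemble.mean(axis=0)                    # (n_atoms, 3)
        diff = ensemble - mean_coords                          # deviations per model
        return np.sqrt((diff**2).sum(axis=2).mean(axis=0))     # (n_atoms,)

    # Example with 10 fake models of a 5-atom fragment:
    rng = np.random.default_rng(0)
    models = rng.normal(0.0, 0.2, size=(10, 5, 3)) + np.arange(5)[None, :, None]
    print(per_atom_spread(models))  # roughly 0.3-0.4 A for 0.2 A per-coordinate noise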

            XDS

            1. Functional specification The program package XDS (Kabsch, 1988a ▶,b ▶, 1993 ▶, 2010 ▶) was developed for the reduction of single-crystal diffraction data recorded on a planar detector by the rotation method using monochromatic X-rays. It includes a set of three programs. XDS accepts a sequence of adjacent non-overlapping rotation images from a variety of imaging-plate, CCD, pixel and multiwire area detectors, infers crystal symmetry and metrics and produces a list of corrected integrated intensities of the reflections occurring in the images in a nearly automatic way. The program assumes that each image covers the same positive amount of crystal rotation and that the rotation axis, incident beam and crystal intersect at one point, but otherwise imposes no limitations on the detector position, on the directions of the rotation axis and incident beam or on the oscillation range covered by each image. XSCALE places the data sets obtained from processing with XDS on a common scale, optionally merges them into one or several sets of unique reflections and reports their completeness and the quality of the integrated intensities. It corrects the data for absorption effects, sensitivity variations in the detector plane and radiation damage. Optionally, it can correct reflections individually for radiation damage by extrapolation to their initial intensities at zero dose. XDSCONV converts reflection data files as obtained from XDS or XSCALE into various formats required by software packages for crystal structure determination. It can generate test reflections or inherit previously selected ones which are used for the calculation of a free R factor to monitor the progress of structure refinement. 2. XDS XDS is organized into eight steps (major subroutines) which are called in succession by the main program. Information is exchanged between the steps by files (see Table 1 ▶), which allows the repetition of selected steps with a different set of input parameters without rerunning the whole program. The files generated by XDS are either ASCII-type files that can be inspected and modified using a text editor or binary control images saved as a byte-offset variant of the CBFlib format (Bernstein & Hammersley, 2005 ▶; Bernstein & Ellis, 2005 ▶). Such images are indicated by the file-name extension .cbf and can be looked at using the open-source program XDS-Viewer written by Michael Hoffer. All files have a fixed name defined by XDS, which makes it mandatory to process each data set in a newly created directory in order to avoid name clashes. Clearly, one should not run more than one XDS job at a time in the same directory. Output files affected by rerunning selected steps (see Table 1 ▶) should also first be given another name if their original contents are meant to be saved. Data processing begins by copying an appropriate input file into the new directory. Input-file templates are provided with the XDS package for a number of frequently used data-collection facilities. The copied input file must be renamed XDS.INP and edited to provide the correct parameter values for the actual data-collection experiment. All parameters in XDS.INP are named by keywords containing an equals sign as the last character and many of them will be mentioned here in context in order to clarify their meaning. Execution of XDS (JOB=XDS) invokes each of the eight program steps as described below. The results and diagnostics from each step are saved in files with the extension .LP attached to the program-step name. 
These files should always be studied carefully to see whether processing was satisfactory or, in the case of failure, to find out what could have gone wrong. 2.1. XYCORR This program step calculates a lookup table of additive spatial corrections at each detector pixel which is stored in the files X-CORRECTIONS.cbf, Y-CORRECTIONS.cbf. Often, the data images have already been corrected for geometrical distortions, in which case XYCORR produces tables of zeros. For spiral read-out imaging-plate detectors the small corrections resulting from radial (ROFF=) and tangential (TOFF=) offset errors of the scanner are computed. For some multiwire and CCD detectors that deliver geo­metrically distorted images, corrections are derived from a calibration image (BRASS_PLATE_IMAGE=file name). This image displays the response to a brass plate containing a regular grid of holes which is mounted in front of the detector and illuminated by an X-ray point source. Clearly, the source must be placed exactly at the location to be occupied by the crystal during the actual data collection, as photons emanating from the calibration source are meant to simulate all possible diffracted beam directions. For visual control, spots that have been located and accepted from the brass-plate image by XYCORR are marked in the file FRAME.cbf. The following problems can be encountered in this step. (i) A misplaced calibration source can lead to an incorrect lookup table, impairing the correct prediction of the observed diffraction pattern in subsequent program steps. (ii) An underexposed calibration image can result in an incomplete and unreliable list of calibration spots. 2.2. INIT INIT determines three lookup tables, saved as the files BLANK.cbf, GAIN.cbf and BKGINIT.cbf, that are required by the subsequent processing steps for classifying pixels in the data images as background or belonging to a diffraction spot (‘strong’ pixels). These tables should be inspected with the XDS-Viewer program. BLANK.cbf contains a lookup table of the detector noise. It is determined from a specific image recorded in the absence of X-rays (DARK_CURRENT_IMAGE=) or is assumed to be a constant derived from the mean recorded value in each corner of the data images. GAIN.cbf codes for the expected variation of the pixel contents in the background region of a data image. The variance of the contents of a pixel in the background region is GAIN·(pixel contents − detector noise). The variance is determined from the scatter of pixel values within a rectangular box (NBX=, NBY=) of size (2·NBX + 1)·(2·NBY + 1) centred at each image pixel in succession. The table GAIN.cbf is used to distinguish background pixels from ‘strong’ pixels that are part of a diffraction spot. BKGINIT.cbf estimates the initial background at each pixel from a few data images specified by the input parameter BACKGROUND_RANGE=. The lookup table is obtained by adding the X-ray background from each image. Shaded regions on the detector (i.e. from the beamstop), pixels outside a user-defined circular region (TRUSTED_REGION=) or pixels with an undefined spatial correction value are classified as untrustworthy and marked by −3. The following problem can be encountered in this step. Some detectors with insufficient protection from electromagnetic pulses may generate badly spoiled images whose inclusion leads to a completely wrong X-ray background table. 
These images can be identified in INIT.LP by their un­expected high mean pixel contents and this step should be repeated with a different set of images. 2.3. COLSPOT COLSPOT locates strong diffraction spots occurring in a subset of the data images and saves their centroids in the file SPOT.XDS. The data subset is defined by contiguous image number ranges, where each range is specified by the keyword SPOT_RANGE=. As described in Kabsch (2010 ▶), spots are defined as sets of ‘strong’ pixels that are adjacent in three dimensions. The classification of ‘strong’ pixels is controlled by the decision constants STRONG_PIXEL= and BACKGROUND_PIXEL=. If the total number of ‘strong’ pixels occurring in the specified data images exceeds the upper limit as given by the input parameter MAXIMUM_NUMBER_OF_STRONG_PIXELS=, the weaker ones are discarded. A spot is accepted if it contains a minimum number of ‘strong’ pixels (MINIMUM_NUMBER_OF_PIXELS_IN_A_SPOT=) and if the spot centroid is sufficiently close to the location of the strongest pixel in the spot (SPOT_MAXIMUM-CENTROID=). The following problem can be encountered in this step. Sharp edges such as ice rings in the images can lead to an excessive number of ‘strong’ pixels being erroneously classified as contributing to diffraction spots. These aliens could prevent IDXREF from recognizing the crystal lattice. 2.4. IDXREF IDXREF uses the initial parameters describing the diffraction experiment as provided by XDS.INP and the observed centroids of the spots from the file SPOT.XDS to find the orientation, metric and symmetry of the crystal lattice and refines all or a specified subset of these parameters [input parameter REFINE(IDXREF)=] . On return, the complete set of parameters are saved in the file XPARM.XDS and the original file SPOT.XDS is replaced by a file of identical name, now with indices attached to each observed spot. Spots not belonging to the crystal lattice are given indices 0, 0, 0. XDS considers the run to be successful if the coordinates of at least 70% of the given spots can be explained with reasonable accuracy (input parameter MAXIMUM_ERROR_OF_SPOT_POSITION=); otherwise, XDS will stop with an error message. Alien spots often arise because of the presence of ice or small satellite crystals and continuation of data processing may still be meaningful. In this case, XDS is called again with an explicit list of the subsequent steps specified in XDS.INP (input parameter JOB=DEFPIX XPLAN INTEGRATE CORRECT). IDXREF uses the methods described in Kabsch (1993 ▶, 2010 ▶) to determine a crystal lattice that explains the observed locations of the diffraction spots listed in the file SPOT.XDS. Firstly, a reciprocal-lattice vector referring to the unrotated crystal is computed from each observed spot centroid. Differences between any two reciprocal-lattice vectors that are above a specified minimal length (SEPMIN=) are accumulated in a three-dimensional histogram. These difference vectors will form clusters in the histogram, since there are many different pairs of reciprocal-lattice vectors of nearly identical vector difference. The clusters are found as maxima in the smoothed histogram (CLUSTER_RADIUS=) and a basis of three linearly independent cluster vectors is selected that allows all other cluster vectors to be expressed as nearly integral multiples of small magnitude with respect to this basis. The basis vectors and the 60 most populated clusters with attached indices are listed in IDXREF.LP. 
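The difference-vector accumulation just described can be illustrated with a toy sketch: pairwise differences of reciprocal-lattice vectors are binned on a coarse three-dimensional grid, and heavily populated cells stand in for the clusters from which basis vectors are chosen. The grid size and cutoff values below are illustrative and are not XDS keywords.

    # Toy sketch of difference-vector clustering for indexing: accumulate
    # pairwise differences of reciprocal-lattice vectors in a coarse 3-D
    # histogram; populated cells approximate candidate basis vectors.
    # O(n^2) over spots, which is fine for a few thousand centroids.
    import numpy as np
    from itertools import combinations

    def difference_vector_histogram(rl_vectors, cell_size=0.002, min_length=0.002):
        """Return a dict mapping 3-D grid cells to counts of difference vectors."""
        counts = {}
        for a, b in combinations(np.asarray(rl_vectors, dtype=float), 2):
            d = a - b
            if np.linalg.norm(d) < min_length:       # discard near-zero differences
                continue
            if d[2] < 0 or (d[2] == 0 and (d[1] < 0 or (d[1] == 0 and d[0] < 0))):
                d = -d                                # fold opposite directions together
            cell = tuple(np.round(d / cell_size).astype(int))
            counts[cell] = counts.get(cell, 0) + 1
        return counts

    def strongest_clusters(counts, n=10, cell_size=0.002):
        """Return the n most populated cells as approximate cluster vectors."""
        best = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]
        return [(np.array(cell) * cell_size, hits) for cell, hits in best]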
If many of the indices deviate significantly from integral values, the program is unable to find a reasonable lattice basis and all further processing will be meaningless. If the space group and unit-cell parameters are specified, a reduced cell is derived and the reciprocal-basis vectors found above are reinterpreted accordingly; otherwise, a reduced cell is determined directly from the reciprocal basis. The parameters of the reduced cell, the coordinates of the reciprocal-basis vectors and their indices with respect to the reduced cell are reported. Based on the orientation and metric of the reduced cell now available, IDXREF indexes up to 3000 of the strongest spots using the local-indexing method. This method considers each spot as a node of a tree and identifies the largest subtree of nodes which can be assigned reliable indices. The number of reflections in the ten largest subtrees is reported and usually shows a dominant first tree corresponding to a single lattice, whereas alien spots are found in small subtrees. Reflections in the largest subtree are used for initial refinement of the basis vectors of the reduced cell, the incident-beam wavevector and the origin of the detector, which is the point in the detector plane nearest to the crystal. Experience has shown that the detector origin and the direction of the incident beam are often specified with insufficient accuracy, which could easily lead to a misindexing of the reflections by a constant offset. For this reason, IDXREF considers alternative choices for the index origin and reports their likelihood of being correct. The parameters controlling the local indexing are INDEX_ERROR=, INDEX_MAGNITUDE=, INDEX_QUALITY= (corresponding to ∊, ϕ and 1 − ℓmin in Kabsch, 2010 ▶) and INDEX_ORIGIN=h 0, k 0, l 0, which is added to the indices of all reflections in the tree. After initial refinement based on the reflections in the largest subtree, all spots which can now be indexed are included. Usually, the detector distance and the direction of the rotation axis are not refined, but if the spots were extracted from images covering a large range of total crystal rotation then better results are obtained by including these parameters in the refinement [REFINE(IDXREF)=] . The refined metric parameters of the reduced cell are used to test each of the 44 possible lattice types as described in Kabsch (2010 ▶). For each lattice type, IDXREF reports the likelihood of its being correct and the conventional unit-cell parameters. The program step concludes with an overview of possible lattice symmetries, but makes no automatic decision for the space group. If the crystal symmetry is unknown, XDS will continue data processing with the crystal being described by its reduced-cell basis vectors and triclinic symmetry. Space-group assignment is postponed to the last program step, CORRECT, when integrated intensities are available. The following problems can be encountered in this step. (i) The indices of many difference-vector clusters deviate significantly from integral values. This can be caused by incorrect input parameters, such as rotation axis, oscillation angle or detector position, by a large fraction of alien spots in SPOT.XDS, by placing the detector too close to the crystal or by an inappropriate choice of the parameters SEPMIN= and CLUSTER_RADIUS= in densely populated images. (ii) Indexing and refinement is unsatisfactory despite well indexed difference-vector clusters. 
This is probably caused by the selection of an incorrect index origin and IDXREF should be rerun with plausible alternatives for INDEX_ORIGIN= after a visual check of a data image with XDS-Viewer. (iii) Despite successful indexing and refinement, IDXREF stops with the error message INSUFFICIENT PERCENTAGE OF INDEXED REFLECTIONS, complaining that less than 70% of the given spots could be explained. Alien spots often arise because of the presence of ice or small satellite crystals and continuation of data processing may still be meaningful. To continue data processing, just specify the missing processing steps in XDS.INP by JOB=DEFPIX XPLAN INTEGRATE CORRECT and call XDS again.

2.5. DEFPIX

DEFPIX recognizes regions in the initial background table (file BKGINIT.cbf) that are obscured by intruding hardware and marks the shaded pixels as untrusted. In addition, pixels that are outside a user-defined resolution range (INCLUDE_RESOLUTION_RANGE=) are marked and eliminated from the trusted region. The marked background table that is thus obtained is saved in the file BKGPIX.cbf which is needed by the subsequent program steps. To recognize the obscured regions in the initial background, DEFPIX generates a control image (file ABS.cbf) that contains values of around 10 000 for unshaded pixels and lower values for shaded pixels. The classification of the pixels into reliable and untrusted pixels is based on the two input parameters VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS= (default 6000 30 000) and INCLUDE_RESOLUTION_RANGE= (default 20.0 0.0). Pixels in the table ABS.cbf with a value outside the ranges specified by the two parameters are marked unreliable (by −3) in the background table BKGPIX.cbf. The following problem can be encountered in this step. If the parameter VALUE_RANGE_FOR_TRUSTED_DETECTOR_PIXELS= specifies a value range that is too narrow, ‘good’ regions will erroneously be excluded from the trusted detector region. Check BKGPIX.cbf with the XDS-Viewer program and if necessary repeat the DEFPIX step with more appropriate values.

2.6. XPLAN

XPLAN supports the planning of data collection. It is based upon information provided by the input files XPARM.XDS and BKGPIX.cbf, both of which become available on processing a few test images with XDS. XPLAN estimates the completeness of new reflection data expected to be collected for each given starting angle and total crystal rotation and reports the results for a number of selected resolution shells in the file XPLAN.LP. To minimize the recollection of data, the name of a file containing already measured reflections can be provided by the input parameter REFERENCE_DATA_SET=. The following problems can be encountered in this step. (i) Incorrect results may occur for some space groups, e.g. P42, if the unit cell determined by XDS from processing a few test images implies reflection indices that are inconsistent with those from the reference data set. However, the correct cell choice can be found by using the old data as a reference and repeating CORRECT with the appropriate reindexing transformation, followed by copying GXPARM.XDS to XPARM.XDS. The same applies if IDXREF was run for an unknown space group and then reindexed in CORRECT. (ii) XPLAN ignores potential reflection overlap owing to the finite oscillation range covered by each image.

2.7. INTEGRATE

INTEGRATE determines the intensity of each reflection predicted to occur in the rotation data images (DATA_RANGE=) and saves the results in the file INTEGRATE.HKL.
The diffraction parameters needed to predict the reflection positions are initially provided by the file XPARM.XDS. These parameters are either kept constant or refined periodically using strong diffraction spots encountered in the data images. Whether refinement should be carried out at all and which parameters are to be refined can be specified by the user [input parameter REFINE(INTEGRATE)=]. The centroids of the strong spots in the data images are computed from pixels that exceed the background by a given multiple of standard deviations (input parameters SIGNAL_PIXEL=, BACKGROUND_PIXEL=). Strong spots are used in the refinement if their centroids are reasonably close to their calculated position (input parameter MAXIMUM_ERROR_OF_SPOT_POSITION=). For determination of the intensity, approximate values describing the extension and the form of the diffraction spot must be specified. The shapes of all spots become very similar when the contents of each of their contributing image pixels is mapped onto a three-dimensional coordinate system, specific for each reflection, which has its origin on the surface of the Ewald sphere at the terminus of the diffracted beam wavevector (see Kabsch, 2010 ▶). The transformed spot can roughly be described as a Gaussian involving two parameters: the standard deviations of the reflecting range σM (input parameter REFLECTING_RANGE_E.S.D.=σM) and the beam divergence σD (input parameter BEAM_DIVERGENCE_E.S.D.=σD). This leads to an integration region around the spot that is defined by the parameters δM (REFLECTING_RANGE=) and δD (BEAM_DIVERGENCE=), which are typically chosen to be 6–10 times larger than σM and σD, respectively. Appropriate values for these parameters are determined automatically by XDS (Kabsch, 2010 ▶); the user has the option to override the automatic assignments. Integration is carried out by a two-step procedure. In the first pass, spot templates are generated by superimposing the profiles of strong reflections after their mapping to the Ewald sphere. Grid points with a value above a minimum percentage of the maximum in the template (parameter CUT=) are marked for inclusion in the final integration. To allow for variations in their shape, profile templates are generated from reflections located at nine regions of equal size covering the detector surface and additional sets of nine to cover equally sized (parameter DELPHI=) batches of images. The actual integration is carried out in the second pass by profile fitting with respect to the spot shape determined in the first pass. Incomplete reflections below a minimum percentage of the observed reflection intensity (parameter MINPK=) will be discarded. Otherwise, the missing intensity is estimated from the learned reflection profiles. On return from the INTEGRATE step, all spots expected to occur in the last data image are encircled and the modified image is saved as the file FRAME.cbf for inspection. The following problems can be encountered in this step. (i) Off-centred profiles indicate incorrectly predicted reflection positions by using the parameters provided by the file XPARM.XDS (i.e. misindexing by using a wrong origin of the indices), crystal slippage or change in the incident-beam direction. (ii) Profiles extending to the borders of the box indicate too-small values of the parameters BEAM_DIVERGENCE= or REFLECTING_RANGE=. This leads to incorrect integrated intensities because of truncated reflection profiles and un­reliable background determination. 
(iii) Display of the file FRAME.cbf shows spots which are not encircled. If these unexpected reflections are not close to the spindle and are not ice reflections, then it is likely that the parameters provided by the file XPARM.XDS are wrong.

2.8. CORRECT

CORRECT applies correction factors to the intensities and standard deviations of all reflections found in the file INTEGRATE.HKL, determines the space group if unknown and refines the unit-cell parameters, reports the quality and completeness of the data set and saves the final integrated intensities in the file XDS_ASCII.HKL. Some of the employed algorithms are new and are described in Kabsch (2010 ▶). CORRECT accepts reflections from the file INTEGRATE.HKL that are (i) recorded (parameter MINPK=) on specified images (parameter DATA_RANGE=); (ii) within a given resolution range (parameter INCLUDE_RESOLUTION_RANGE=); (iii) outside ice rings (parameter EXCLUDE_RESOLUTION_RANGE=); (iv) not overloaded (parameter OVERLOAD=); and (v) not marked for exclusion in the file REMOVE.HKL. Thus, the user has the option to exclude unreliable reflections from the final data set by repeating the CORRECT step with appropriate parameter values. The intensities of the accepted reflections are first corrected for effects arising from polarization of the incident beam (parameters FRACTION_OF_POLARIZATION=, POLARIZATION_PLANE_NORMAL=) and absorption effects (parameters AIR=, SILICON=, SENSOR_THICKNESS=) arising from differences in path lengths of the diffracted beam. These corrections do not depend on knowledge of the space group. The integrated intensities of the reflections in the file INTEGRATE.HKL may or may not have been indexed in the correct space group; for the purpose of integration, it is important only that all reflections occurring in the data images have been indexed with respect to some unit-cell basis and that their locations on the images were hit exactly. The correct reflection indices in the true space group are always a linear transformation of the original indices used in INTEGRATE.HKL. All lattices consistent with the locations of the reflections saved in INTEGRATE.HKL (decision parameters MAX_CELL_AXIS_ERROR=, MAX_CELL_ANGLE_ERROR=) and their corresponding linear transformations are printed to provide a useful overview similar to that shown in IDXREF.LP. If the space group is not specified, XDS proposes one of the enantiomorphous space groups without screw axes that is compatible with the observed lattice symmetry and explains the intensities of a subset of the reflections (parameter TEST_RESOLUTION_RANGE=) at an acceptable R meas (Diederichs & Karplus, 1997 ▶; Weiss, 2001 ▶) using a minimum number of unique reflections. The criteria for an acceptable R meas are controlled by the decision parameters MIN_RFL_Rmeas= and MAX_FAC_Rmeas=. The user can always override the automatic decisions by specifying the correct space-group number (parameter SPACE_GROUP_NUMBER=) and unit-cell parameters (parameter UNIT_CELL_CONSTANTS=) in XDS.INP and repeating the CORRECT step. This provides a simple way to rename orthorhombic unit-cell parameters, which often becomes necessary if screw axes are present.
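Because the true indices are related to the integration indices by a linear transformation, reindexing is conceptually a matrix operation on (h, k, l). The sketch below (plain Python with numpy) shows the general idea only; the matrix chosen is arbitrary and is NOT the convention of the REIDX= parameter, which is described in the XDS documentation.

```python
import numpy as np

# Illustrative 3x3 integral transformation (swap h and k, negate l).
T = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, -1]])

def reindex(hkl, transform):
    """Apply an integral transformation to an (N, 3) array of Miller indices."""
    return (np.asarray(hkl) @ np.asarray(transform).T).astype(int)

# Example: reindex([[1, 2, 3]], T) -> [[2, 1, -3]]
```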
In addition, the user has the option to specify the following in XDS.INP: (i) a reference data set (parameter REFERENCE_DATA_SET=), (ii) a reindexing transformation (parameter REIDX=) and (iii) three basis vectors if known from processing a previous data set taken at the same crystal orientation in a multi-wavelength experiment (parameters UNIT_CELL_A-AXIS=, UNIT_CELL_B-AXIS=, UNIT_CELL_C-AXIS=). The possibility of comparing the new data with a reference data set is particularly useful for resolving the issue of alternative settings of polar or rhombohedral cells (such as P4, P6 and R3). Also, reference data are quite useful for recognizing misindexing or for testing potential heavy-atom derivatives. For refinement of the unit-cell parameters [parameter REFINE(CORRECT)=], CORRECT uses a subset of the accepted reflections whose observed centroid is sufficiently close to the predicted spot position (parameter MAXIMUM_ERROR_OF_SPOT_POSITION=). The refined set of parameters is saved in the file GXPARM.XDS, which has an identical layout to the file XPARM.XDS produced by IDXREF. If the crystal has not slipped during data collection, these parameters are quite accurate. Other correction factors (parameter CORRECTIONS=) which partially compensate for radiation damage, absorption effects and variations in the sensitivity of the detector surface are determined from the symmetry-equivalent reflections usually found in the data images. The corrections are chosen such that the integrated intensities of symmetry-equivalent reflections come out as similar as possible. The user may control application of the various corrections by specifying the parameter CORRECTIONS= by a combination of the key­words DECAY MODULATION ABSORPTION. Whether Friedel pairs are considered as symmetry-equivalent reflections in the calculation of the correction factors depends on the values of the two parameters STRICT_ABSORPTION_CORRECTION= and FRIEDEL’S_LAW=. The number of correction factors is controlled by the input parameters MINIMUM_I/SIGMA=, NBATCH= and REFLECTIONS/CORRECTION_FACTOR=. The residual scatter in intensity of symmetry-equivalent reflections is used to estimate their standard deviations. Here, the initial estimate v 0(I) (obtained from the INTEGRATE step) for the variance of the reflection intensity I is replaced by v(I) = a[v 0(I) + bI 2]. The two constants a and b are chosen to minimize discrepancies between v(I) and the variance estimated from sample statistics of symmetry-related reflections. Based on the more realistic error estimates for the intensities, outliers are recognized by comparison with other symmetry-equivalent reflections. These outliers are included in the main output file XDS_ASCII.HKL, in which they are marked by a negative sign attached to the estimated standard deviations of their intensity. Classification of a reflection as a misfit is con­trolled by a decision constant which has the default value of WFAC1=1.5. Specification of a lower value such as WFAC1=1.0 by the user will lead to an increasing number of misfits and lower R factors as outliers are not included in the reported statistics. Data quality as a function of resolution is described by the agreement of intensities of symmetry-related reflections and quantified by the R factors R merge and the more robust indicator R meas (Diederichs & Karplus, 1997 ▶; Weiss, 2001 ▶). 
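For reference, R merge and the multiplicity-weighted R meas can be computed in a few lines; the sketch below (plain Python, with a hypothetical input layout of unmerged, symmetry-reduced observations) follows the definitions of Diederichs & Karplus (1997) and Weiss (2001).

```python
import math
from collections import defaultdict

def r_merge_and_r_meas(observations):
    """observations: iterable of (unique_hkl, intensity) pairs, where unique_hkl
    already identifies symmetry-equivalent measurements of the same reflection."""
    groups = defaultdict(list)
    for hkl, intensity in observations:
        groups[hkl].append(intensity)

    num_merge = num_meas = denom = 0.0
    for intensities in groups.values():
        n = len(intensities)
        if n < 2:
            continue
        mean_i = sum(intensities) / n
        dev = sum(abs(i - mean_i) for i in intensities)
        num_merge += dev
        num_meas += math.sqrt(n / (n - 1)) * dev   # multiplicity correction
        denom += sum(intensities)
    return num_merge / denom, num_meas / denom
```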
These R factors as well as the intensities of all reflections with indices of type h00, 0k0 and 00l and those expected to be systematically absent provide important information for identification of the correct space group. Clearly, large R factors or many rejected reflections or large observed intensities for reflections that are expected to be systematically absent suggest that the assumed space group or indexing is incorrect. The presence or absence of anomalous scatterers is specified by the parameter FRIEDEL’S_LAW=. Finally, CORRECT analyzes the distribution of reflection intensities as a function of their resolution and reports outliers from the Wilson plot. Often, these aliens arise from ice rings in the data images. To suppress the unwanted reflections from the final output file XDS_ASCII.HKL, the user copies them to a file named REMOVE.HKL in the current directory and repeats the CORRECT step. The following problems can be encountered in this step. (i) Incomplete data sets may lead to wrong conclusions about the space group, as some of its symmetry operators might not be involved in the R-factor calculations. (ii) Often, the CORRECT step is repeated several times. It should be remembered that XDS overwrites earlier versions of the output files XDS_ASCII.HKL, GXPARM.XDS etc.

3. XSCALE

The scaling program XSCALE (i) puts one or more files obtained from data processing with XDS on a common scale and reports the completeness and quality of the data sets; (ii) offers a choice of either combining symmetry-equivalent observations into a single unique reflection or saving the scaled but unmerged observations in the output file; (iii) allows several output files that are placed on the same scale, a feature that is recommended for MAD data sets taken from the same crystal at different wavelengths; (iv) determines correction factors that partially compensate for absorption effects, sensitivity variations in the detector plane and radiation damage; and (v) can correct reflections individually for radiation damage (Diederichs et al., 2003 ▶). The program uses a new fast algorithm (Kabsch, 2010 ▶) and imposes no limitations on the number of data sets or scaling/correction factors. The easiest way to run XSCALE is to copy a template input file named XSCALE.INP to a new directory and to replace the parameter values by the appropriate values describing the actual scaling run. The input parameters may be given in arbitrary order, except for the parameters defining the input and output reflection files (INPUT_FILE=, OUTPUT_FILE=). Here, an output file is defined first by the parameter OUTPUT_FILE= that will include the scaled and merged reflections from all following input files specified by the parameters INPUT_FILE= until the next occurrence of OUTPUT_FILE= in XSCALE.INP. An arbitrary number of output files can be specified (together with their set of input files) in a single run of XSCALE. All output files are then on the same scale, which is a useful program feature for MAD data sets. The reflections in each output file will be unmerged and Friedel pairs will be considered to be different if this holds for all of the input data sets unless explicitly redefined by the parameters MERGE= and FRIEDEL’S_LAW=. Moreover, each output file accepts an additional parameter that controls how the Friedel pairs of the input files are treated in the calculation of the absorption correction factors.
If STRICT_ABSORPTION_CORRECTION=FALSE, Friedel pairs are treated as symmetry-equivalent reflections in these calculations, which could lead to an underestimate of the anomalous differences in the presence of anomalous scatterers. Friedel pairs are only treated as different reflections in the calculations if STRICT_ABSORPTION_CORRECTION=TRUE and FRIEDEL’S_LAW=FALSE. For each input file, a resolution window for accepting reflections (INCLUDE_RESOLUTION_RANGE=), the extent of absorption corrections (CORRECTIONS=DECAY MODULATION ABSORPTION) and the number of correction factors (NBATCH=) can be specified. Finally, each input data set can be corrected for radiation damage by specifying the name of the crystal the data set was obtained from (CRYSTAL_NAME=). Specification of this parameter invokes zero-dose extrapolation of individual reflection intensities to compensate for the effects of radiation damage experienced by the crystal so far (see Diederichs et al., 2003 ▶). Each resulting scaled data set is of XDS_ASCII format. It can be converted into a CCP4-style multi-record MTZ file using the copy feature of the program POINTLESS (Evans, 2006 ▶) available from the web (ftp://ftp.ccp4.ac.uk/ccp4/6.0.2/prerelease/pointless.html) or converted by XDSCONV into the format required by various structure-solution packages.

4. XDSCONV

XDSCONV accepts reflection-intensity data files as produced by XSCALE or CORRECT and converts them into the format required by software packages for structure determination. XDSCONV estimates structure-factor moduli based on the assumption that the intensity data set obeys Wilson’s distribution and uses a Bayesian approach to statistical inference as described by French & Wilson (1978 ▶). The output file generated may inherit the test reflections previously used to calculate a free R factor (Brünger, 1992 ▶) or may contain new test reflections selected by XDSCONV.

5. Parallelization of XDS

In order to efficiently use modern multiprocessor hardware, a major effort has been undertaken to replace the original code of XDS by routines that can run concurrently with very little need for synchronization. As described above, data processing by XDS is organized into eight steps that must be executed in a fixed order since the result of each step is needed as input for the subsequent ones. Thus, the only way to speed up processing is to make each step faster. The most computationally intensive steps are COLSPOT and INTEGRATE and, to a lesser degree, the routine that refines diffraction parameters in IDXREF and CORRECT. Thus, the highest savings in wall clock time are expected to result from changing these routines so that each one can make efficient use of the multiprocessor hardware. Two methods can be used (simultaneously) to speed up data processing. In the first method, XDS divides the set of data images into approximately equal portions, calls a shell script that starts an independent job for processing each portion of images by the computer cluster and waits until all jobs have finished. The number of such independent jobs can be limited by the user (MAXIMUM_NUMBER_OF_JOBS=); up to 99 jobs are allowed. This method works even if the processors do not share the same address space since the jobs are independent processes that do not communicate at all. The second method uses OpenMP to control execution by a team of threads and relies on a shared-memory multiprocessor platform.
This allows the program to exploit data parallelism at a more fine-grained level to speed up refinements and routines for setting up and solving systems of linear equations. The maximum number of threads that can be employed by the parallel version of XDS (xds_par, xscale_par) can be limited by the user (MAXIMUM_NUMBER_OF_PROCESSORS=); up to 32 processors can be used. OpenMP has been chosen for execution control because it hardly adds to the complexity of the program code and most importantly does not require the maintenance of separate versions of the source code depending on whether the program is intended for execution by a team of processors or just by a single CPU. Moreover, OpenMP has become the de facto standard and compilers accepting OpenMP directives are available for most shared-memory multiprocessor platforms. The new version of COLSPOT comprises an initial part, a concurrent procedure and a final part. After initialization, each available processor is kept busy analyzing its share of rotation images for strong pixels, which are saved in a processor-specific file. In the final sequential part of COLSPOT all files resulting from the concurrent computations are read and the location addresses, image running numbers and signal values of the strong pixels are stored in a hash table. Strong pixels belonging to the same spot can be located rapidly in this table and the centroids of the spots are saved in the final output file from this step. For the INTEGRATE step, the rotation images are divided into approximately equal portions for independent processing under control by a shell script according to the first method described above. When all jobs have finished, the integrated intensities from each independent file are joined. Minor problems could occur for reflections that receive intensity contributions from images that have been processed by different jobs. Compared with processing as a single job, the observed intensity differences are small and disappear if the different jobs use identical reference profiles and diffraction parameters to predict spot locations [to avoid refinements, specify REFINE(INTEGRATE)=!]. In addition, each of the independent jobs can be executed by a team of processors controlled by OpenMP. The rotation images analyzed by each job are split into a sequence of batches of consecutive images that cover a total rotation range that is large enough to accommodate the integration domain. The batches are evaluated in strictly sequential order; parallel processing is confined to images within each batch. The restructured routine for the INTEGRATE step consists of code regions for parallel execution interspersed by sequential sections. After initialization, strong reflections and their mean size and extent are determined concurrently. The diffraction parameters are refined in parallel processing mode based on the observed spot locations. In the following sequential section a database is generated containing information about all reflections occurring in this batch of images. A subset of strong reflections is also identified that is useful for the subsequent reflection-profile learning pass. The mean profile of these reflections is determined concurrently in a second pass through the images in the batch. Reflection integration by profile fitting is carried out in parallel in the third cycle through the batch.
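The partitioning of the rotation images among independent jobs (the first method above) can be pictured with a small sketch; this is plain Python for illustration only, not XDS code, and the function name is invented.

```python
def split_data_range(first, last, max_jobs):
    """Divide a contiguous DATA_RANGE into approximately equal portions,
    one per independent job."""
    n_images = last - first + 1
    n_jobs = min(max_jobs, n_images)
    base, extra = divmod(n_images, n_jobs)
    ranges, start = [], first
    for job in range(n_jobs):
        size = base + (1 if job < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# split_data_range(1, 360, 4) -> [(1, 90), (91, 180), (181, 270), (271, 360)]
```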
In the final sequential step the results from each job, which have been saved in files, are harvested and intensity contributions to the same reflection from adjacent batches are merged.

6. Availability

Documentation and executable versions of the XDS package for widely used computer systems running under Linux or OSX can be obtained from the XDS homepage (http://xds.mpimf-heidelberg.mpg.de/) free of charge for use by academics for noncommercial applications. Additional information can be found at http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/XDS. For looking at rotation data images and control images generated by XDS, an open-source program XDS-Viewer written by Michael Hoffer can be obtained from http://xds-viewer.sourceforge.net under the GNU General Public License. A graphical interface XDSi (Kursula, 2004 ▶) is available (http://cc.oulu.fi/~pkursula/xdsi.html) that simplifies the operation of XDS.
              REFMAC5 for the refinement of macromolecular crystal structures

              1. Introduction As a final step in the process of solving a macromolecular crystal (MX) structure, refinement is carried out to maximize the agreement between the model and the X-ray data. Model parameters that are optimized in the refinement process include atomic coordinates, atomic displacement parameters (ADPs), scale factors and, in the presence of twinning, twin fraction(s). Although refinement procedures are typically designed for the final stages of MX analysis, they are also often used to improve partial models and to calculate the ‘best’ electron-density maps for further model (re)building. Refinement protocols are therefore an essential component of model-building pipelines [ARP/wARP (Perrakis et al., 1999 ▶), SOLVE/RESOLVE (Terwilliger, 2003 ▶) and Buccaneer (Cowtan, 2006 ▶)] and are of paramount importance in guiding manual model updates using molecular-graphics software [Coot (Emsley & Cowtan, 2004 ▶), O (Jones et al., 1991 ▶) and XtalView (McRee & Israel, 2008 ▶)]. The first software tools for MX refinement appeared in the 1970s. Real-space refinement using torsion-angle parameterization was introduced by Diamond (1971 ▶). This was followed a few years later by reciprocal-space algorithms for the refinement of individual atomic parameters with added energy (Jack & Levitt, 1978 ▶) and restraints (Konnert, 1976 ▶) in order to deliver chemically reasonable models. The energy and restraints approaches differ only in terminology as they use similar information and both can be unified using a Bayesian formalism (Murshudov et al., 1997 ▶). Early programs used the well established statistical technique of least-squares residuals with equal weights on all reflections (Press et al., 1992 ▶), with gradients and second derivatives (if needed) calculated directly. This changed when Fourier methods, which were developed for small-molecule structure refinement (Booth, 1946 ▶; Cochran, 1948 ▶; Cruickshank, 1952 ▶, 1956 ▶), were formalized for macromolecules (Ten Eyck, 1977 ▶; Agarwal, 1978 ▶). The use of the FFT for structure-factor and gradient evaluation (Agarwal, 1978 ▶) sped up calculations dramatically and the refinement of large molecules using relatively modest computers became realistic. Later, the introduction of molecular dynamics (Brünger, 1991 ▶), the generalization of the FFT approach for all space groups (Brünger, 1989 ▶) and the development of a modular approach to refinement programs (Tronrud et al., 1987 ▶) dramatically changed MX solution procedures. Also, the introduction of the very robust and popular small-molecular refinement program SHELXL (Sheldrick, 2008 ▶) to the macromolecular community allowed routine analysis of high-resolution MX data, including the refinement of merohedral and non-merohedral twins. More sophisticated statistical approaches to MX structure refinement started to emerge in the 1990s. Although the basic formulations and most of the necessary probability distributions used in crystallography were developed in the 1950s and 1960s (Luzzati, 1951 ▶; Ramachandran et al., 1963 ▶; Srinivasan & Ramachandran, 1965 ▶; see also Srinivasan & Parthasarathy, 1976 ▶, and references therein), their implementation for MX refinement started in the middle of the 1990s (Pannu & Read, 1996 ▶; Bricogne & Irwin, 1996 ▶; Murshudov et al., 1997 ▶). 
It should be emphasized that prior to the application of maximum-likelihood (ML) techniques in MX refinement, the importance of advanced statistical approaches to all stages of MX analysis had been advocated by Bricogne (1997 ▶) for two decades. Nowadays, most MX refinement programs offer likelihood targets as an option. Although ML can be very well approximated using the weighted least-squares approach in the very simple case of refinement against structure-factor amplitudes (Murshudov et al., 1997 ▶), ML has the attractive advantage that it is relatively easy (at least theoretically) to generalize for the joint utilization of a variety of sources of observations. For example, it was immediately extended to use experimental phase information (Bricogne, 1997 ▶; Murshudov et al., 1997 ▶; Pannu et al., 1998 ▶). In the last two decades, there have been many developments of likelihood functions towards the exploitation of all available experimental data for refinement, thus increasing the reliability of the refined model in the final stages of refinement and improving the electron density used in model building in the early stages of MX analysis (Bricogne, 1997 ▶; Skubák et al., 2004 ▶, 2009 ▶). MX crystallography can now take advantage of highly optimized software packages dealing with all of the various stages of structure solution, including refinement. There are several programs available that either are designed to perform refinement or offer refinement as an option. These include BUSTER/TNT (Blanc et al., 2004 ▶), CNS (Brünger et al., 1998 ▶), MAIN (Turk, 2008 ▶), MOPRO (Guillot et al., 2001 ▶), phenix.refine (Adams et al., 2010 ▶), REFMAC5 (Murshudov et al., 1997 ▶), SHELXL (Sheldrick, 2008 ▶) and TNT (Tronrud et al., 1987 ▶). While MOPRO was specifically designed for niche ultrahigh-resolution refinement and is able to model deformation density, all of the other programs can deal with a multitude of MX refinement problems and produce high-quality electron-density maps, although with different emphases and strengths. This contribution describes the various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite (Collaborative Computational Project, Number 4, 1994 ▶). REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum that is encountered in macromolecular crystallo­graphy. 2. Target functions in REFMAC5 As in all other refinement programs, the target function minimized in REFMAC5 has two components: a component utilizing geometry (or prior knowledge) and a component utilizing experimental X-ray knowledge, where f total is the total target function to be minimized, con­sisting of functions controlling the geometry of the model and the fit of the model parameters to the experimental data, and w is a weight between the relative contributions of these two components. In macromolecular crystallography, the weight is traditionally selected by trial and error. REFMAC5 offers automatic weighting, which is based on the fact that both components are the natural logarithm of a probability distribution. However, this ‘automatic’ weight may lead to unreasonable deviations from ideal geometry (either too tight or too relaxed) in some cases, as the ideal geometry is difficult to describe statistically. For these cases, the weight parameter may need to be selected manually to produce more reasonable geometry, e.g. 
such that the root-mean-square deviation of the bond lengths from the ideal values is 0.02 Å and at resolutions lower than 3 Å perhaps even smaller. From a Bayesian viewpoint (O’Hagan, 1994 ▶), these functions have the following probabilistic interpretation (ignoring constants which are irrelevant for minimization purposes): From this point of view, MX refinement is similar to a well known technique in statistical analysis: maximum posterior (MAP) estimation. The model parameters are linked with the experimental data via f xray, i.e. likelihood is a mechanism that controls information flow from the experimental data to the derived model. Consequently, it is important to design a likelihood function that allows optimal information transfer from the data to the derived model. f geom ensures that the derived model is consistent with the presumed chemical and structural knowledge. This function plays the role of regularization, reduction of the effective number of parameters and transfer of known information to the new model. If care is not taken, then wrong information may be transferred to the model; removing the effect of such errors may be difficult, if possible at all. The design of such functions should be performed using verifiable invariant information and it should be testable and revisable during the refinement and model-building procedures. Functions dealing with geometry usually depend only on atomic parameters. We are not aware of any function used in crystallography that deals with the prior geometry probability distributions of overall parameters. A possible reason for the lack of interest in (and necessity of) this type of function may be that, despite popular belief, the statistical problem in crystallography is sufficiently well defined and that the main problems are those of model parameterization and completion. The existing refinement programs differ in the target functions and optimization techniques used to derive model parameters. Most MX programs use likelihood target functions. However, their form, implementations and parameterizations are different. Therefore, it should not come as a surprise if different programs give (slightly) different results in terms of model parameters, electron-density maps and reliability factors (such as R and R free). 2.1. X-ray component The X-ray likelihood target functions used in REFMAC5 are based on a general multivariate probability distribution of E observations given M model structure factors. This function is derived from a multivariate complex Gaussian distribution of N = E + M structure factors for acentric reflections and from a multivariate real Gaussian distribution for centric reflections and has the following form: where P = P(|F 1|, …, |FE |; F E+1, …, FN ), Fi = |Fi|exp(ιαi), |F 1|, …, |FE | denote the observed amplitudes, F E+1, …, FN are the model structure factors, C N is the covariance matrix with the elements of its inverse denoted by aij , C M is the bottom right square submatrix of C N of dimension M with the elements of its inverse denoted by cij . We define cij = 0 for i ≤ 0 or j ≤ 0. |C N | and |C M | are the determinants of matrices C N and C M , α = (α1, …, αE) is the vector of the unknown phases of the observations that need to be integrated and P pr(α) is a probability distribution expressing any prior knowledge about the phases. In the simplest case of one observation, one model and no prior knowledge about phases, the integral in (3) can be evaluated analytically.
In this case, the function follows a Rice distribution (Bricogne & Irwin, 1996 ▶), which is a non-central χ² distribution of |Fo|²/Σ and |Fo|²/2Σ with non-centrality parameters D²|Fc|²/Σ and D²|Fc|²/2Σ and one and two degrees of freedom for centric and acentric reflections, respectively (Stuart & Ord, 2009 ▶), where D in its simplest interpretation is 〈cos(Δxs)〉, a Luzzati error parameter (Luzzati, 1952 ▶) expressing errors in the positional parameters of the model, F c is the model structure factor, |F o| is the observed amplitude of the structure factor and Σ is the uncertainty or the second central moment of the distribution. Both Σ and D enter the equation as part of the covariance matrices C N and C M from (3). Σ is a function of the multiplicity of the Miller indices (∊ factor), experimental uncertainties (σo), model completeness and model errors. For simplicity, a binned parameterization is used: the current version of REFMAC5 estimates D and Σmod in resolution bins. Working reflections are used for estimation of D and free reflections are used for Σmod estimation. Although this simple parameterization works in many cases, it may give misleading results for data from crystals with pseudo translation, OD disorder or modulated crystals in general. Currently, there is no satisfactory implementation of the error model to account for these cases. 2.2. Incorporation of experimental phase information in model refinement 2.2.1. MLHL likelihood MLHL likelihood (Bricogne, 1997 ▶; Murshudov et al., 1997 ▶; Pannu et al., 1998 ▶) is based on a special case of the probability distribution (3) where we have one observation, one model and phase information derived from an experiment available as a prior distribution P pr(α), where F o = |F o|exp(ια), F c = |F c|exp(ιαc), α is the unknown phase of the structure factor and α1 and α2 are its possible values for a centric reflection. The prior phase probability distribution P pr(α) is usually represented as a generalized von Mises distribution (Mardia & Jupp, 1999 ▶) and is better known in crystallography as a Hendrickson–Lattman distribution (Hendrickson & Lattman, 1970 ▶), P pr(α) = N exp(A cosα + B sinα + C cos2α + D sin2α), where A, B, C and D are coefficients of the Fourier transformation of the logarithm of the phase probability distribution and N is the normalization coefficient. The distribution is unimodal when C and D are zero; otherwise, it is a bimodal distribution that reflects the possible phase uncertainty in experimental phasing. For centric reflections C and D are zero. 2.2.2. SAD/SIRAS likelihood The MLHL likelihood is dependent on the reliability and accuracy of the prior distribution P pr(α). However, the phase distributions after density modification (or even after phasing), which are usually used as P pr(α), often suffer from inaccurate estimation of the phase errors. Furthermore, MLHL [as well as any other special case of (3) with a non-uniform P pr(α)] assumes independence of the prior phases from the model phases. These shortcomings can be addressed by using experimental information directly from the experimental data, instead of from the P pr(α) distributions obtained in previous steps of the structure-solution process. Currently, SAD and SIRAS likelihood functions are implemented in REFMAC5. The SAD probability distribution (Skubák et al., 2004 ▶) is obtained from (3) by setting E = 2, M = 2, P pr(α) = constant and |F 1| = |F o +|, |F 2| = |(F o −)*|, F 3 = F c +, F 4 = (F c −)*, where F + and F − are the structure factors of the Friedel pairs.
The model structure factors are constructed using the current parameters of the protein, the heavy-atom substructure and the inputted anomalous scattering parameters. Similarly, the SIRAS function (Skubák et al., 2009 ▶) is a special case of (3) with E = 3, M = 3, P pr(α) = constant and |F 1| = |F o N |, |F 2| = |F o +|, |F 3| = |(F o −)*|, F 4 = F c N, F 5 = F c +, F 6 = (F c −)*, where |F 1| and F 4 correspond to the observation and the model of the native crystal, respectively, and |F 2|, |F 3|, F 5 and F 6 refer to the Friedel pair observations and models of the derivative crystal. If any of the E observations are symmetrically equivalent, for instance centric Friedel pair intensities, the equation is reduced appropriately so as to only include non-equivalent observations and models. The incorporation of prior phase information by the refinement function is especially useful in the early and middle stages of model building and at all stages of structure solution at lower resolutions, owing to the improvement in the observation-to-parameter ratio. The refinement of a well resolved high-resolution structure is often best achieved using the simple Rice function. Fig. 1 ▶ shows the effect of various likelihood functions on automatic model building using ARP/wARP (Perrakis et al., 1999 ▶). 2.3. Twin refinement The function used for twin refinement is a generalization of the Rice distribution in the presence of a linear relationship between the observed intensities. This function has the form where N o and N model are normalization coefficients. In the first equation, the first term inside the integral, P(I o; F), represents the probability distribution of observations if ‘ideal’ structure factors are known. Here, all reflections that are twinned and that can be grouped together are included. Models representing the data-collection instrument, if available, could be added to this term. The second term, P(F; model), represents a probability distribution of the ‘ideal’ structure factors should an atomic model be known for a single crystal. Here, all reflections from the asymmetric unit that contribute to the observed ‘twinned’ intensities are included. If the data were to come from more than one crystal or if, for example, SAD should be used simultaneously with twinning, then this term would need to be modified appropriately. F c is a function of atomic and overall parameter D. Overall parameters also include Σ and twin-fraction parameters. f represents the way structure factors from the asymmetric unit contribute to the particular ‘twinned’ intensity. The above formula is more symbolic rather than precise; further details of twin refinement will be published elsewhere. REFMAC5 performs the following preparations before starting refinement against twinned data. (i) Identify potential (pseudo)merohedral twin operators by analyses of cell/space-group combination using the algorithm developed by Lebedev et al. (2006 ▶). (ii) Calculate R merge for each potential twin operator and filter out twin operators for which R merge is greater than 0.5 or a user-defined value. (iii) Estimate twin fractions for the remaining twin domains and filter out those with small twin fractions (the default value is 0.05). (iv) Make sure that the point group and twin operators form a group. Strictly speaking this stage is not necessary, but it makes bookkeeping easy. (v) Perform twin refinement using the remaining twin operators. Twin fractions are refined at every cycle. 
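Steps (ii) and (iii) above amount to a simple filtering loop; a minimal sketch of the idea (plain Python with a hypothetical data layout, not REFMAC5 code) might look as follows.

```python
def filter_twin_operators(intensities, operators, r_cut=0.5):
    """Keep candidate twin operators with a sufficiently low R merge between
    I(h) and I(T h).

    intensities -- dict mapping (h, k, l) tuples to observed intensities
    operators   -- list of (name, 3x3 integer matrix) candidate twin operators
    """
    kept = []
    for name, T in operators:
        num = den = 0.0
        for hkl, i1 in intensities.items():
            h2 = tuple(sum(T[i][j] * hkl[j] for j in range(3)) for i in range(3))
            i2 = intensities.get(h2)
            if i2 is None or h2 <= hkl:       # count each related pair once
                continue
            num += abs(i1 - i2)               # deviations from the pair mean
            den += i1 + i2
        if den > 0 and num / den <= r_cut:
            kept.append(name)
    return kept
```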
All integrals necessary for evaluation of the minus log-likelihood function and its derivatives with respect to the structure factors are evaluated using the Laplace approximation (McKay, 2003 ▶). 2.4. Modelling bulk-solvent contribution Typically, a significant part of a macromolecular crystal is occupied by disordered solvent. Accurate modelling of this part of the crystal is still an unsolved problem of MX. The contribution of bulk solvent to structure factors is strongest at low resolution, although its effect at high resolution is still non-negligible. The absence of good models for disordered solvent may be one of the reasons why R factors in MX are significantly higher than those in small-molecular crystallography. For small molecules R factors can be around 1%, whereas for MX they are rarely less than 10% and more often around 20% or even higher. REFMAC5 uses two types of bulk (disordered) solvent models. One of them is the so-called Babinet’s bulk-solvent model, which is based on the assumption that the only difference between solvent and protein at low resolution is their scale factor (Tronrud, 1997 ▶). Here, we use a slight modification of the formulation described by Tronrud (1997 ▶) and assume that if protein electron density is convoluted using the Gaussian kernel and multiplied by an appropriate scale factor, then protein and solvent electron densities are equal, where * denotes convolution, denotes the Fourier transform and k babinet = k babinet0exp(−B babinet|s|2/4). Here, we used the convolution theorem, which states that the Fourier transform of the convolution of two functions is the product of their Fourier transforms. The second bulk-solvent model is derived similarly to that described by Jiang & Brünger (1994 ▶). The basic assumption is that disordered solvent atoms are uniformly distributed over the region of the asymmetric unit that is not occupied by the atoms of the modelled part of the crystal structure. The region of the asymmetric unit occupied by the atomic model is masked out. Any holes inside this mask are removed using a cavity-detection algorithm. A constant value is assigned outside this region and the structure factors F mask are calculated using an FFT algorithm. These structure factors, multiplied by appropriate scale factors (estimated during the scaling procedure), are added to those calculated from the atomic model. Additionally, various mask parameters may optionally be optimized. One should be careful with bulk-solvent corrections, especially when the atomic model is incomplete. This type of bulk-solvent model may result in smeared-out electron density that may reduce the height of electron density in less-ordered and unmodelled parts of the crystal. The final total structure factors with scale and solvent contributions included take the following form: where the ks are scale factors, s is the reciprocal-space vector, |s| is the length of this vector, U aniso is the crystallographic anisotropic tensor that obeys crystal symmetry, F mask is the contribution from the mask bulk solvent and F protein is the contribution from the protein part of the crystal. Usually, either mask or Babinet bulk-solvent correction is used. However, sometimes their combination may provide better statistics (lower R factors) than either individually. 
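The assembly of the total structure factor can be sketched as follows (plain Python with numpy). This is a simplified, isotropic illustration with invented argument names, not the exact REFMAC5 expression, which uses an anisotropic overall scale tensor obeying the crystal symmetry and may also include a Babinet-type term.

```python
import numpy as np

def total_structure_factor(f_protein, f_mask, s2, k_overall, b_overall,
                           k_mask, b_mask):
    """Schematic total structure factor: overall (isotropic) scaling applied to
    the protein contribution plus a scaled, B-smeared mask bulk-solvent term.

    f_protein, f_mask -- complex structure-factor arrays for model and mask
    s2                -- |s|^2 for each reflection
    """
    solvent = k_mask * np.exp(-b_mask * s2 / 4.0) * f_mask
    return k_overall * np.exp(-b_overall * s2 / 4.0) * (f_protein + solvent)
```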
The overall parameters of the solvent models, the overall anisotropy and the scale factors are estimated using a least-squares fit of the amplitudes of the total structure factors to the observed amplitudes. In the case of twin refinement, the following function is used to estimate overall parameters including twin fractions (details of twin refinement will be published elsewhere), where f(α, F) is as defined in (8). Both (11) and (12) are minimized using the Gauss–Newton method with eigenvalue filtering to solve linear equations, which ensures that even very highly correlated parameters can be estimated simultaneously. However, one should be careful in interpreting these parameters as the system is highly correlated. Once overall parameters such as the scale factors and twin fractions have been estimated, REFMAC5 estimates the overall parameters of one of the abovementioned likelihood functions and evaluates the function and its derivatives with respect to the atomic parameters. A general description of this procedure can be found in Steiner et al. (2003 ▶). 2.5. Geometry component The function controlling the geometry has several components. (i) Chemical information about the constituent blocks (e.g. amino acids, nucleic acids, ligands) of macromolecules and the covalent links between them. (ii) Internal consistency of macromolecules (e.g. NCS). (iii) Structural knowledge (known structures, restraints on current interatomic distances, secondary structures). The first component is used by all programs and has been tabulated in an mmCIF dictionary (Vagin et al., 2004 ▶) now used by several programs, including REFMAC5, phenix.refine (Adams et al., 2010 ▶) and Coot (Emsley & Cowtan, 2004 ▶). The current version of the dictionary contains around 9000 entries and several hundred covalent-link descriptions. Any new entries may be added using one of several programs, including Sketcher (Vagin et al., 2004 ▶) from CCP4 (Collaborative Computational Project, Number 4, 1994 ▶), JLigand (unpublished work), PRODRG (Schüttelkopf & van Aalten, 2004 ▶) and phenix.elbow (Adams et al., 2010 ▶). Standard restraints on the covalent structure have the general form where bm represents a geometric parameter (e.g. bonds, angles, chiralities) calculated from the model and bi is the ideal value of this particular geometric parameter as tabulated in the dictionary. Apart from ω (the angle of the peptide bond) and χ (the angles of amino-acid side chains), torsion angles in general are not restrained by default. However, the user can request to restrain a particular torsion angle defined in the dictionary or can define general torsion angles and use them as restraints. In general, it is not clear how to handle the restraint on torsion angles automatically, as these angles may depend on the covalent structure as well as the chemical environment of a particular ligand. 2.6. Noncrystallographic symmetry restraints 2.6.1. Automatic NCS definition Automatic NCS identification in REFMAC5 is performed using the following procedure. (i) Align the sequences of all chains with all chains using the dynamic alignment algorithm (Needleman & Wunsch, 1970 ▶). (ii) Accept the alignment if the number of aligned residues is more than k (default 15) residues and the sequence identity for aligned residues is more than α% (default 80%). (iii) Calculate the global root-mean-square deviation (r.m.s.d.) using all aligned residues. (iv) Calculate the average local r.m.s.d.
using the formula where N is the number of aligned residues, j indexes the aligned residues, Nj is the number of corresponding atoms in residue j, n j is the number of atoms in the ith group, rl is the vector of differences between corresponding atomic positions and Rj and tj are the rotation and translation that give the best superposition between atoms in group i. To calculate the r.m.s.d., it is not necessary to calculate the rotation and translation operators explicitly or to apply these transformations to atoms. Rather, it is achieved implicitly using Procrustes analysis, as described, for example, in Mardia & Bibby (1979 ▶). When k = N, the local and global r.m.s.d. coincide. (v) If the r.m.s.d. is less than β Å (default 2.5 Å), then we consider the chains to be aligned. (vi) Prepare the list of aligned atoms. If after applying the transformation matrix (calculated using aligned atoms) the neighbours (waters, ligands) of aligned atoms are superimposed, then they are also added to the list of aligned atoms. (vii) If local NCS is requested, then prepare pairs of corresponding interatomic distances. Steps (i)–(v) are performed once during each session of refinement. Step (vi) is performed during every cycle of refinement in order to allow conformational changes to occur. 2.6.2. Global NCS For global NCS restraints, transformation operators (Rij and tij ) that optimally superpose all NCS-related molecules are estimated and the following residual is added to the total target function, where the weight w is a user-controllable parameter. Note that the transformation matrices are estimated using xi and xj and thus they are dependent on these parameters. Therefore, in principle the gradient and second-derivative calculations should take this dependence into account, although this dependence is ignored in the current version of REFMAC5. Ignoring the contribution of these terms may reduce the rate of convergence, although in practice it does not seem to pose a problem. 2.6.3. Local NCS The following function (similar to the implementation in BUSTER) is used for local NCS restraints, where GM is the Geman–McClure robust estimator function (Geman & McClure, 1987 ▶), which can be written Fig. 2 ▶ shows that for small values of r this function is similar to the usual least-squares function. However, it behaves differently for large r: least-square residuals do not allow conformational changes to occur, whereas this type of function is more tolerant to such changes. 2.6.4. External structure restraints The interatomic distances within the structure being analysed may be similar to a known (deposited) structure, particularly in localized regions. In cases where it makes sense, this information can be exploited in order to aid the refinement of the target structure. In doing so, the target structure is pulled towards the con­formation adopted by the known structure. The mechanism for generic external restraints described by Mooij et al. (2009 ▶) is used for external structure restraints. In our implementation, structural information from external known structures is utilized by applying restraints to the distances between atom pairs based on a presumed atomic correspondence between the two structures. 
The following function is used for external structure restraints, where the atoms ai belong to the set A of atoms for which a correspondence is known, dij is the distance between the positions of atoms ai and aj , is the corresponding distance in the known structure, σ ij is the estimated standard deviation of dij about and d max ensures that atom pairs are only restrained within localized regions, allowing insensitivity to global conformational changes. External structure restraints should be weighted differently to the other geometry com­ponents in order to allow the restraint strength to be separately specified. Consequently, a weight w ext is applied, which should be appropriately chosen depending on the data quality and resolution, the structural similarity between the external known structure and the target, and the choice of d max. The Geman–McClure function with sensitivity parameter σGM is used to increase robustness to outliers, as with the local NCS restraints. Prior information from the external known structure(s) is generated using the software tool PROSMART. Specifically, this includes the atomic correspondence A, distances , standard deviations σ ij and the distance cutoff d max. Potential sources of prior structural information include different conformations of the target chain (such as those that may result from using different crystallization conditions or in a different binding state) as well as those from homologous or structurally similar proteins. It is possible to use multiple known structures as prior information. The combination of this information results in modified values of and σ ij as appropriate. This allows a structure to be refined utilizing information from a whole class of similar structures, rather than just a single source. Furthermore, it opens up the future possibility for multi-crystal co-refinement. The employed formalism also allows the application of atomic distance restraints to secondary-structure elements (and, in principle, other motifs). Consequently, external restraints may be applied without requiring the prior identification of known structures similar to the target. This is intended to help to refine such motifs towards the expected/presumed local conformation. This technique has been found to be particularly useful for low-resolution crystals and in cases where the target structure is unable to be refined to a satisfactory level. When used appropriately, external structure restraints should increase refinement reliability. Consequently, the difference between the R and R free values is expected to decrease in successful cases. Fig. 3 ▶ shows the refinement statistics resulting from using external restraints to refine a low-resolution bluetongue virus VP4 enzyme (Sutton et al., 2007 ▶). A sequence-identical structure solved at a higher resolution is used as prior information. Refinement statistics are compared after ten refinement cycles with and without using external restraints. Using the external restraints results in a 2.8% improvement in R free. Furthermore, the difference between the R and R free values is reduced from 11.5 to 4.3%, suggesting greatly increased refinement reliability. 2.6.5. ‘Jelly-body’ restraints The ratio of the number of observations to the number of adjustable parameters is very small at low resolution. Even after accounting for chemical restraints, this ratio stays very small and refinement in such cases is usually unstable. 
The danger of overfitting is very high; this is reflected in large differences between the R and R free values. External structure restraints and the use of experimental phase information (described above) provide ways of dealing with this problem. Unfortunately, it is not always possible to find similar structures refined at high resolution (or at least ones that result in a sufficiently successful improvement in refinement statistics) and experimental phase information is not always available or sufficient. Fortunately, statistical techniques exist to deal with this type of problem. Such techniques include ridge regression (Stuart et al., 2009 ▶), the lasso estimation procedure (Tibshirani, 1997 ▶) and Bayesian estimation with prior knowledge of parameters (O’Hagan, 1994 ▶). REFMAC5 has a regularization function in interatomic distance space that has the form for pairs of atoms i, j from the same chain, with maximum radius d max, which can be controlled (default 4.25 Å). Note that this term does not contribute to the value of the function or its gradient; it only changes the second derivative, thus changing the search direction. It should be noted that a similar technique has been implemented in CNS (Schröder et al., 2010 ▶). Note that if all interatomic distances were constrained, then individual atomic refinement would become rigid-body refinement. The effect of ‘jelly-body’ restraints is the implicit parameterization between the rigid body and individual atoms. This technique has strong similarity to elastic network model calculations (Trion, 1996 ▶). This simple formula has been found to work surprisingly well. 2.6.6. Atomic displacement parameter restraints Unlike positional parameters, where prior knowledge can be designed using basic knowledge of the chemistry of the building blocks of macromolecules and analysis of high-resolution structures, it is not obvious how to design restraints for atomic displacement parameters (ADPs). Ideally, restraints should reflect the geometry of the molecules as well as their overall mobility. Various programs use various restraints (Sheldrick, 2008 ▶; Adams et al., 2010 ▶; Konnert & Hendrickson, 1980 ▶; Murshudov et al., 1997 ▶). In the new version of REFMAC5, restraints on ADPs are based on the distances between distributions. If we assume that atoms are represented as Gaussian distributions, then we are able to design restraints based on the distance between such distributions. For two given distributions in three-dimensional space P(x) and Q(x), the symmetrized Kullback–Leibler (KL) divergence (McKay, 2003 ▶) is defined as follows: It can be verified that the symmetrized KL divergence satisfies the conditions of a metric distance in the space of distributions. The KL divergence can also be represented as follows: This distance changes more smoothly than the L 2 distance between functions and seems to be a useful criterion for the design of approximate probability distributions (McKay, 2003 ▶; O’Hagan, 1994 ▶). When both distributions are Gaussian with mean zero, this distance has an elegant form. Assume that both atoms have Gaussian distribution: In this case, the KL divergence becomes In the case of isotropic ADPs, KL has an even simpler form: REFMAC5 uses restraints based on the KL divergence: The summation is over all atom pairs with distance less than r max. The weights depend on the nature of the bonds as well as on the distance between the atoms. If atoms are bonded or angle-related then the weight is larger.
The summation is over all atom pairs with a distance of less than r_max. The weights depend on the nature of the bonds as well as on the distance between the atoms: if atoms are bonded or angle-related then the weight is larger, and it is smaller if the atoms are not related by covalent bonds. Moreover, if the distance between the atoms is more than 3 Å then the weight decreases with distance from the value w_0,ij used for nonbonded atoms that are closer than 3 Å to each other.

2.6.7. Rigid-bond restraints

For anisotropic atoms there are so-called rigid-bond restraints, based on the idea of the rigid-bond test for anisotropic atoms (Hirshfeld, 1976). The idea is that the projections of the U values of two bonded atoms onto the bond vector joining them should be similar. In other words, if two atoms are bonded then an oscillation across the bond is more likely than an independent oscillation along the bond: the atoms oscillate along the bond in a concerted fashion. Rigid-bond restraints are designed as follows. Assume that two atoms have positions x_1 and x_2 with corresponding ADPs U_1 and U_2. The unit vector joining the atoms is calculated, the projections of the corresponding U values onto this vector are formed (giving scalar variances along the bond direction), and the KL divergence between these projected one-dimensional distributions is calculated for all such pairs and added to the target function. Again, the weights depend on the nature of the bonds between the atoms and the distances between them. Note that if the ADPs of both bonded atoms are isotropic then the rigid-bond restraint is equivalent to the KL restraint described above.

2.6.8. Sphericity restraints

To avoid atoms exploding and becoming too elliptical or, even worse, non-elliptical, REFMAC5 uses restraints on sphericity. This is a simple restraint that penalizes the deviation of each anisotropic tensor from its isotropic equivalent: the sum runs over the anisotropic atoms k and over the components i, j of each anisotropic tensor, with weights w_k for this particular type of restraint. The weights depend on the number of other restraints (KL, rigid-bond) on the atom; atoms that have fewer restraints have stronger weights on sphericity, since these atoms are more likely to be unstable. Similar restraints on ADPs are used in several other refinement programs (Sheldrick, 2008; Adams et al., 2010).

3. Parameterization

3.1. General parameters

REFMAC5 uses the standard parameterization of molecules in terms of atomic coordinates and isotropic/anisotropic atomic displacement parameters. The refinement of these parameters is performed using an FFT formulation for gradients and approximations for second derivatives. Details of these formulations have been published elsewhere (Murshudov et al., 1997, 1999; Steiner et al., 2003). Once the gradients and approximate second derivatives have been calculated for these parameters, they are used to calculate the derivatives with respect to derived parameters. Derived parameters include those for rigid-body and TLS refinement.

3.2. Rigid body

Rigid-body parameterization is achieved as follows. For each rigid group, transformation operators are defined and new positions are calculated from the starting positions, where R_j is the rotation matrix, t_origin is the centre of mass of the rigid group and t_j is the translational component of the transformation; x_old are the starting coordinates of the atoms and x_new are their positions after application of the transformation operators (a reconstruction of this transformation is given below). There are six parameters per rigid group, defining the rotation matrix and the translational component. At each cycle of refinement, an eigenvalue-filtering technique is used to avoid potential singularities arising from the shape of the rigid groups.
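The rigid-group transformation referred to above can be reconstructed as follows (in LaTeX); the exact sign and ordering conventions, and the Euler-angle parameterization of the rotation, used internally by REFMAC5 may differ from this sketch.

    % New position of an atom belonging to rigid group j
    % (rotation R_j about the group's centre of mass t_origin, then translation t_j)
    \mathbf{x}_{\mathrm{new}} = R_j\,(\mathbf{x}_{\mathrm{old}} - \mathbf{t}_{\mathrm{origin}})
                                + \mathbf{t}_{\mathrm{origin}} + \mathbf{t}_j

    % Six parameters per group: three Euler angles defining R_j and the three components of t_j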
It should be noted that no terms between rigid groups are calculated for the approximate second-derivative matrix. For large rigid groups this does not pose much of a problem; however, for many small rigid groups it may slow down convergence substantially. In any case, it is not recommended to divide molecules into very small rigid groups; for such cases, 'jelly-body' refinement should produce better results. Once derivatives with respect to the positional parameters have been calculated, those for the rigid-body parameters are calculated using the chain rule. The current version of REFMAC5 uses an Euler-angle parameterization.

3.3. TLS

Atomic displacement parameters describe the spread of atomic positions and can be derived from the Fourier transform of a Gaussian probability distribution function for the atomic centre. The atomic displacement parameters are an important part of the model. Traditionally, a single parameter describing isotropic displacements has been used, namely the B factor. However, it is well known that atomic displacements are likely to be anisotropic owing to directional bonding, and at high resolution the six parameters per atom of a fully anisotropic model can be refined. TLS refinement is a way of modelling anisotropic displacements using only a few parameters, so that the method can be used at medium and low resolutions. The TLS model was originally proposed for small-molecule crystallography (Schomaker & Trueblood, 1968) and was incorporated into REFMAC5 almost ten years ago (Winn et al., 2001). The idea behind TLS is to suppose that groups of atoms move as rigid bodies and to constrain the anisotropic displacement parameters of these atoms accordingly. The rigid-body motion is described by translation (T), libration (L) and screw (S) tensors, using a total of 20 parameters for each rigid body. Given values for these 20 parameters, anisotropic displacement parameters can be derived for each atom in the group (and this relationship also allows the derivatives to be calculated via the chain rule). Usually, an extra isotropic displacement parameter (the residual B factor) is refined for each atom in addition to the TLS contribution. The sum of these two contributions can be output using the supplementary program TLSANL (Howlin et al., 1993) or, optionally, directly from REFMAC5. TLS groups need to be chosen before refinement and constitute part of the definition of the model for the macromolecule. Groups of atoms should conform to the idea that they move as a quasi-rigid body. Often the choice of one group per chain suffices (or at least serves as a reference calculation) and this is the default in REFMAC5. More detailed choices can be made using methods such as TLSMD (Painter & Merritt, 2006). By default, REFMAC5 also includes waters in the first hydration shell, which it seems reasonable to assume move in concert with the protein chain. Fig. 4 shows the effect of TLS refinement and the orientation of the libration tensors. In this case, TLS refinement improves R/R_free and the derived libration tensors make biological sense.
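The relation between the 20 TLS parameters and the derived per-atom anisotropic displacement parameters can be sketched as follows. This is the standard Schomaker–Trueblood result, written here up to sign and origin conventions, which vary between programs, so the exact form used by REFMAC5 may differ.

    % A is the antisymmetric (cross-product) matrix built from the atomic position
    % relative to the TLS origin; T, L and S are the translation, libration and screw tensors
    U_{\mathrm{TLS}} = T + A\,L\,A^{T} + A\,S + (A\,S)^{T}

    % Total ADP of an atom: TLS contribution plus the refined residual isotropic term
    U_{\mathrm{total}} = U_{\mathrm{TLS}} + \frac{B_{\mathrm{residual}}}{8\pi^{2}}\, I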
4. Optimization

REFMAC5 uses the Gauss–Newton method for optimization (for an elegant and comprehensive review of optimization techniques, see Nocedal & Wright, 1999). In this method, the exact second derivative is not calculated, but rather approximated to make sure that it is always non-negative. Once the derivatives or their approximations have been calculated, the linear equation H p = −G is built and solved for the shift vector p, where H is the approximate second-derivative matrix and G is the gradient vector. The contributions of most of the geometrical terms are calculated using algorithms designed for quadratic optimization or least-squares fitting (Press et al., 1992). To calculate the contribution from the Geman–McClure terms, an approximation described by Huber & Ronchetti (2009) is used; it ensures that H stays non-negative and consequently that the directions obtained by solving the linear equation point towards a reduction of the total function. The contribution of the X-ray term to the gradient is calculated using FFT algorithms (Murshudov et al., 1997). The Fisher information matrix, as described by Steiner et al. (2003), is used to calculate the contribution of the likelihood functions to the matrix H. Tests have demonstrated that using the diagonal elements of the Fisher information matrix and both the diagonal and nondiagonal elements of the geometry terms results in a more stable refinement. Once all of the terms contributing to H and G have been calculated, the linear equation is solved using preconditioned conjugate-gradient methods (Nocedal & Wright, 1999; Tronrud, 1992). A diagonal matrix formed from the diagonal elements of H is used as a preconditioner; this brings parameters with different overall scales (positional and B values) onto the same scale, and controlling convergence becomes easier. If the conjugate-gradient procedure does not converge in N_maxiter cycles (the default is 1000), then the diagonal terms of the H matrix are increased; thus, if the matrix is not positive definite then ridge regression is activated. In the presence of a potential (near-)singularity, REFMAC5 uses the following procedure to solve the linear equation.

(i) Apply the preconditioner; at this stage H and G are modified. Denote the new matrix by H_1 and the new vector by G_1.

(ii) Set γ = 0.

(iii) Define a new matrix H_2 = H_1 + γI, where I is the identity matrix.

(iv) Solve the equation H_2 p = −G_1 using the conjugate-gradient method for sparse, positive-definite linear systems (Press et al., 1992). If convergence is achieved in fewer than N_maxiter iterations, proceed to the next step; otherwise, increase γ and go to step (iii).

(v) Decondition the matrix, gradient and shift vectors.

(vi) Apply the shifts to the atomic parameters, making sure that the ADPs remain positive.

(vii) Calculate the value of the total function.

(viii) If the value of the total function is less than the previous value, proceed to the next step; otherwise, reduce the shifts and repeat steps (vi)–(viii).

(ix) Finish the refinement cycle.

After application of the shifts, the next cycle of refinement starts.
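A minimal Python sketch of steps (i)–(v) above, assuming a dense H for simplicity, is given below. The function and parameter names, the starting value and growth factor for γ, and the use of SciPy's conjugate-gradient solver are illustrative choices, not the actual REFMAC5 implementation.

    import numpy as np
    from scipy.sparse.linalg import cg

    def solve_shift_vector(H, G, n_maxiter=1000, gamma_start=1e-4, gamma_factor=10.0):
        # (i) Jacobi-style preconditioning with the diagonal of H, so that positional
        # and B-value parameters are brought onto a comparable scale.
        d = np.sqrt(np.abs(np.diag(H)))
        d[d == 0.0] = 1.0
        D_inv = np.diag(1.0 / d)
        H1 = D_inv @ H @ D_inv
        G1 = D_inv @ G
        # (ii)-(iv) Increase the ridge term gamma until conjugate gradients converge
        # within n_maxiter iterations (info == 0 signals convergence in SciPy).
        gamma = 0.0
        while True:
            H2 = H1 + gamma * np.eye(H1.shape[0])
            p, info = cg(H2, -G1, maxiter=n_maxiter)
            if info == 0:
                break
            gamma = gamma_start if gamma == 0.0 else gamma * gamma_factor
        # (v) Decondition the shift vector back to the original parameter scale.
        return D_inv @ p

Steps (vi)–(viii), in which the shifts are applied (keeping the ADPs positive), the total function is re-evaluated and the shifts are scaled down if the function has not decreased, would wrap around a call such as this one.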
5. Conclusions

Refinement is an important step in macromolecular crystal structure elucidation. It is used as the final step in structure solution, as well as an intermediate step to improve models and to obtain improved electron density to facilitate further model rebuilding. REFMAC5 is one of the refinement programs that incorporates various tools to deal with crystal peculiarities, low-resolution MX structure refinement and high-resolution refinement. There are also tabulated dictionaries of the constituent blocks of macromolecules, cofactors and ligands; the number of dictionary elements now exceeds 9000. There are also tools to deal with new ligands and with covalent modifications of ligands and/or proteins.

Low-resolution MX structure analysis is still a challenging task. There are several outstanding problems that need to be dealt with before we can claim that low-resolution MX analysis is complete. Statistics, image processing and computer science provide general methods for these and related problems. Unfortunately, these techniques cannot be applied directly to MX structure analysis, either because of the huge computational resources needed or because the assumptions used are not applicable to MX. In our opinion, the problems of state-of-the-art MX analysis that need urgent attention include the following.

(i) Reparameterization depending on the quality and the amount of experimental data. Some tools implemented in REFMAC5 go part of the way towards dealing with this problem. They include (a) restraining against known homologous structures, (b) 'jelly-body' restraints, or refinement along implicit normal modes, (c) long-range ADP restraints based on the KL divergence, (d) automatic local and global NCS restraints and (e) restraints to experimental phase information. However, low-resolution refinement and model (re)building are still not as automatic as for high-resolution structures.

(ii) Statistical methods for peculiar crystals with a low signal-to-noise ratio. Some of the implemented tools, such as likelihood-based twin refinement and SAD/SIRAS refinement, help in the analysis of some of the data produced by such crystals. The analysis of data from such peculiar crystals as those exhibiting OD disorder with or without twinning, multiple cells, translocational disorder or modulated crystals in general remains problematic.

(iii) Another important problem is that of limited and noisy data. As a result of the resolution cutoff (owing to the survival time of the crystal under X-ray irradiation, or otherwise), the resultant electron density usually exhibits noise owing to series termination. If the resolution to which the crystal actually diffracts is the same as the resolution of the data, then series termination is not very serious, as the signal dies out towards the resolution limit; however, in these cases the electron density becomes blurred, reflecting high mobility of the molecules or crystal disorder. When map sharpening is used, the signal is amplified and series termination becomes a serious problem. To reduce the noise, it is necessary to work with the full Fourier transformation; in other words, resolution extension and the prediction of missing reflections become important problems. The dramatic effect of such an approach for density modification at high resolution has been demonstrated by Altomare et al. (2008) and Sheldrick (2008). The direct replacement of missing reflections by calculated values necessarily introduces bias towards model errors and may mask real signal. To avoid this, it is necessary to integrate over the errors in the model parameters (coordinates, B values, scale values and twin fractions). However, since the number of parameters is very large (sometimes exceeding 1,000,000), integration using available numerical techniques is not feasible.

(iv) Error estimation. Despite the advances in MX, there have been few attempts to evaluate the errors in the estimated parameters, and works attempting to deal with this problem are few and far between (Sheldrick, 2008). To complete MX structure analysis, it is necessary to develop and implement techniques for error estimation.
If this is achieved, then incorrect structures could be eliminated while analysing the MX data and building the model. One promising approach to this problem is the Gaussian Markov random-field sampling technique (Rue & Held, 2005) using the (approximate) second derivative as the field-defining matrix.

(v) Multicrystal refinement, with simultaneous multicrystal averaging of isomorphous or non-isomorphous crystals, is one of the important directions for low-resolution refinement. If it is dealt with properly, then the number of structures analysed at low resolution should increase substantially. Further improvement may come from a combination of various experimental techniques; for example, the simultaneous treatment of electron-microscopy (EM) and MX data could increase the reliability of EM models and put MX models in the context of larger biological systems.

The direct use of unmerged data is another direction in which refinement procedures could be developed. If this were achieved, then several long-standing problems could become easier to deal with. Two such problems are the following. (i) In general, the space group of a crystal should be considered as an adjustable parameter. If unmerged data are used, then space-group assumptions could be tested after every few sessions of refinement and model building. (ii) Dealing with the processes occurring in the crystal during data collection requires unmerged data; one of the best-known such problems is radiation damage.

                Author and article information

Journal
mBio, American Society for Microbiology (1752 N St. NW, Washington, DC)
ISSN: 2150-7511
Issue: May 2024; published online 04 April 2024
Volume 15, Issue 5, e03414-23
Affiliations
[1] Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, China
[2] Zhejiang Key Laboratory of Medical Epigenetics, Hangzhou Normal University, Hangzhou, China
[3] The Affiliated Hospital of Hangzhou Normal University, Hangzhou Normal University, Hangzhou, China
[4] Photosynthesis Research Center, College of Life and Environmental Sciences, Hangzhou Normal University, Hangzhou, China
University of Washington School of Medicine, Seattle, Washington, USA
Columbia University, New York, New York, USA
                Author notes
Address correspondence to Xiaoling Xu, xuxl@hznu.edu.cn

                Jiejie Shen and Wenping Wu contributed equally to this article. The order of author names was determined based on the actual contributions each author made to the work.

                The authors declare no conflict of interest.

                Author information
                https://orcid.org/0000-0002-3006-5783
                https://orcid.org/0000-0002-4905-6705
                https://orcid.org/0000-0001-8995-1213
Article
mbio.03414-23
DOI: 10.1128/mbio.03414-23
PMCID: PMC11077971
PMID: 38572988
                Copyright © 2024 Shen et al.

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license.

History
Received: 08 January 2024
Accepted: 13 March 2024
Page count
Supplementary material: 1; authors: 11; figures: 6; tables: 2; references: 50; pages: 22; words: 12,283
Funding
Funded by: National Natural Science Foundation of China (NSFC); award IDs: 32000034; 31870740 and 32171227; 82101628; 32201026
Funded by: Natural Science Foundation of Zhejiang Province (ZJNSF); award IDs: LR22C020002; LY24C010003
                Categories
                Research Article
Molecular and Cellular Biology

                Life sciences
acetyl-CoA carboxylase, biotin carboxylase, biotin carboxyl carrier protein, carboxyltransferase, 3-hydroxypropionate
