      Physician Empathy Is Not Associated with Laboratory Outcomes in Diabetes: a Cross-sectional Study


Most cited references (58)


          Why Most Published Research Findings Are False

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.

Modeling the Framework for False Positive Findings

Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10].
According to the 2 × 2 table, one gets PPV = (1 − β)R/(R − βR + α). A research finding is thus more likely true than false if (1 − β)R > α. Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 − β)R > 0.05. What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables.

Bias

First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been “research findings,” but nevertheless end up presented and reported as such, because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design, data, analysis, and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption, since typically it is impossible to know which relationships are indeed true. In the presence of bias (Table 2), one gets PPV = ([1 − β]R + uβR)/(R + α − βR + u − uα + uβR), and PPV decreases with increasing u, unless 1 − β ≤ α, i.e., 1 − β ≤ 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1.

Conversely, true research findings may occasionally be annulled because of reverse bias. For example, with large measurement errors relationships are lost in noise [12], or investigators use data inefficiently or fail to notice statistically significant relationships, or there may be conflicts of interest that tend to “bury” significant findings [13]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However, it is probably fair to say that reverse bias is not as common. Moreover, measurement errors and inefficient use of data are probably becoming less frequent problems, since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless, reverse bias may be modeled in the same way as bias above. Also, reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance.
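Both PPV expressions above (without bias and with bias u) are straightforward to check numerically. The following minimal Python sketch is illustrative only and is not part of the original article; the parameter values in the example are arbitrary choices.

```python
def ppv(R, power, alpha=0.05):
    """PPV without bias: (1 - beta) * R / (R - beta*R + alpha)."""
    beta = 1.0 - power
    return (1.0 - beta) * R / (R - beta * R + alpha)

def ppv_with_bias(R, power, u, alpha=0.05):
    """PPV in the presence of bias u (the Table 2 expression)."""
    beta = 1.0 - power
    num = (1.0 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# Illustrative scenario (assumed values): even pre-study odds (R = 1),
# 80% power, alpha = 0.05, and a modest bias of u = 0.10.
print(round(ppv(1.0, 0.80), 2))                   # ~0.94
print(round(ppv_with_bias(1.0, 0.80, 0.10), 2))   # ~0.85, i.e., bias lowers PPV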
Testing by Several Independent Teams

Several independent teams may be addressing the same sets of research questions. As research efforts are globalized, it is practically the rule that several research teams, often dozens of them, may probe the same or similar questions. Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding, and this receives unilateral attention.

The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 − βⁿ)/(R + 1 − [1 − α]ⁿ − Rβⁿ) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 − β < α, i.e., typically 1 − β < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term βⁿ is replaced by the product of the terms βᵢ for i = 1 to n, but inferences are similar.

Corollaries

A practical example is shown in Box 1. Based on the above considerations, one may deduce several interesting corollaries about the probability that a research finding is indeed true.

Box 1. An Example: Science at Low Pre-Study Odds

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10⁻⁴, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10⁻⁴. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10⁻⁴.

Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10⁻⁴. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10⁻⁴, hardly any higher than the probability we had before any of this extensive research was undertaken!
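The first two figures quoted in Box 1 follow directly from the formulas above. The short check below is illustrative code (not from the article); it reproduces the single-study and with-bias probabilities for the Box 1 parameters, while the ten-team figure comes from the separate Table 3 expression and is not recomputed here.

```python
# Box 1 scenario: R = 10/100,000, 60% power, alpha = 0.05, bias u = 0.10.
R, power, alpha, u = 1e-4, 0.60, 0.05, 0.10
beta = 1.0 - power

pre_study = R / (R + 1.0)
ppv_no_bias = (1.0 - beta) * R / (R - beta * R + alpha)
ppv_bias = ((1.0 - beta) * R + u * beta * R) / (
    R + alpha - beta * R + u - u * alpha + u * beta * R)

print(f"pre-study probability ~ {pre_study:.1e}")    # ~1.0e-04
print(f"PPV, no bias          ~ {ppv_no_bias:.1e}")  # ~1.2e-03, about 12-fold higher
print(f"PPV, bias u = 0.10    ~ {ppv_bias:.1e}")     # ~4.4e-04
```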
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. Small sample size means smaller power and, for all functions above, the PPV for a true research finding decreases as power decreases towards 1 − β = 0.05. Thus, other factors being equal, research findings are more likely true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized) [14], than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller) [15].

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3–20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1–1.5) [7]. Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information, such as microarrays and other high-throughput discovery-oriented research [4,8,17], should have extremely low PPV.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem.
For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28].

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29].

These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation.
Most Research Findings Are False for Most Research Designs and for Most Fields

In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to “correct” the low power of single studies is probably false if R ≤ 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance of being true, if R = 1:10. Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30,31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias.
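Scenarios like those just described can be explored with the same bias-adjusted PPV expression. In the sketch below the parameter choices (power, pre-study odds R, and bias u) are assumptions picked to echo the scenarios in the text rather than values copied from Table 4, so treat the printed figures as illustrative.

```python
def ppv_with_bias(R, power, u, alpha=0.05):
    """Post-study probability that a claimed finding is true, with bias u."""
    beta = 1.0 - power
    return ((1.0 - beta) * R + u * beta * R) / (
        R + alpha - beta * R + u - u * alpha + u * beta * R)

# Assumed parameters chosen to mirror the scenarios described above.
scenarios = [
    ("Adequately powered RCT, 1:1 pre-study odds, low bias", 1.0,  0.80, 0.10),
    ("Underpowered early-phase trial, 1:5 odds",             0.20, 0.20, 0.20),
    ("Well-powered exploratory epidemiology, 1:10 odds",     0.10, 0.80, 0.30),
]
for label, R, power, u in scenarios:
    print(f"{label}: PPV ~ {ppv_with_bias(R, power, u):.2f}")
# Roughly 0.85, 0.23 (about one in four), and 0.20 (about one in five).
```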
Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias. For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases. For fields with very low PPV, the few true relationships would not distort this overall picture much.

Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results. Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating.

How Can We Improve the Situation?

Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability. Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown “gold” standard. However, large studies may still have biases, and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32–34].

Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized trials [35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment.
Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials. Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [36].

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.

            False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant.

            In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
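The effect of this kind of flexibility is easy to see in a toy Monte Carlo. The sketch below is not the authors' code; the sample size, number of simulations, and correlation are hypothetical choices. It tests two correlated outcome measures under a true null and counts a "finding" whenever either t-test reaches p < .05, which pushes the realized false-positive rate noticeably above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, r = 20, 10_000, 0.5      # hypothetical simulation settings
cov = [[1.0, r], [r, 1.0]]                    # two outcomes correlated at r = .5

false_positives = 0
for _ in range(n_sims):
    # Both groups are drawn from the same distribution: the null is true.
    a = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
    b = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue
    # "Flexible" analysis: report whichever outcome happens to be significant.
    if min(p1, p2) < 0.05:
        false_positives += 1

print(false_positives / n_sims)   # typically around .08-.09, well above .05
```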

              Management of Hyperglycemia in Type 2 Diabetes: A Patient-Centered Approach

Glycemic management in type 2 diabetes mellitus has become increasingly complex and, to some extent, controversial, with a widening array of pharmacological agents now available (1–5), mounting concerns about their potential adverse effects and new uncertainties regarding the benefits of intensive glycemic control on macrovascular complications (6–9). Many clinicians are therefore perplexed as to the optimal strategies for their patients. As a consequence, the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD) convened a joint task force to examine the evidence and develop recommendations for antihyperglycemic therapy in nonpregnant adults with type 2 diabetes. Several guideline documents have been developed by members of these two organizations (10) and by other societies and federations (2,11–15). However, an update was deemed necessary because of contemporary information on the benefits/risks of glycemic control, recent evidence concerning efficacy and safety of several new drug classes (16,17), the withdrawal/restriction of others, and increasing calls for a move toward more patient-centered care (18,19).

This statement has been written incorporating the best available evidence and, where solid support does not exist, using the experience and insight of the writing group, incorporating an extensive review by additional experts (acknowledged below). The document refers to glycemic control; yet this clearly needs to be pursued within a multifactorial risk reduction framework. This stems from the fact that patients with type 2 diabetes are at increased risk of cardiovascular morbidity and mortality; the aggressive management of cardiovascular risk factors (blood pressure and lipid therapy, antiplatelet treatment, and smoking cessation) is likely to have even greater benefits. These recommendations should be considered within the context of the needs, preferences, and tolerances of each patient; individualization of treatment is the cornerstone of success. Our recommendations are less prescriptive than and not as algorithmic as prior guidelines. This follows from the general lack of comparative-effectiveness research in this area. Our intent is therefore to encourage an appreciation of the variable and progressive nature of type 2 diabetes, the specific role of each drug, the patient and disease factors that drive clinical decision making (20–23), and the constraints imposed by age and comorbidity (4,6). The implementation of these guidelines will require thoughtful clinicians to integrate current evidence with other constraints and imperatives in the context of patient-specific factors.

PATIENT-CENTERED APPROACH

Evidence-based advice depends on the existence of primary source evidence. This emerges only from clinical trial results in highly selected patients, using limited strategies. It does not address the range of choices available, or the order of use of additional therapies. Even if such evidence were available, the data would show median responses and not address the vital question of who responded to which therapy and why (24). Patient-centered care is defined as an approach to “providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions” (25).
This should be the organizing principle underlying health care for individuals with any chronic disease, but given our uncertainties in terms of choice or sequence of therapy, it is particularly appropriate in type 2 diabetes. Ultimately, it is patients who make the final decisions regarding their lifestyle choices and, to some degree, the pharmaceutical interventions they use; their implementation occurs in the context of the patients’ real lives and relies on the consumption of resources (both public and private). Patient involvement in the medical decision making constitutes one of the core principles of evidence-based medicine, which mandates the synthesis of best available evidence from the literature with the clinician's expertise and patient's own inclinations (26). During the clinical encounter, the patient's preferred level of involvement should be gauged and therapeutic choices explored, potentially with the utilization of decision aids (21). In a shared decision-making approach, clinician and patient act as partners, mutually exchanging information and deliberating on options, in order to reach a consensus on the therapeutic course of action (27). There is good evidence supporting the effectiveness of this approach (28). Importantly, engaging patients in health care decisions may enhance adherence to therapy.

BACKGROUND

Epidemiology and health care impact

Both the prevalence and incidence of type 2 diabetes are increasing worldwide, particularly in developing countries, in conjunction with increased obesity rates and westernization of lifestyle. The attendant economic burden for health care systems is skyrocketing, owing to the costs associated with treatment and diabetes complications. Type 2 diabetes remains a leading cause of cardiovascular disorders, blindness, end-stage renal failure, amputations, and hospitalizations. It is also associated with increased risk of cancer, serious psychiatric illness, cognitive decline, chronic liver disease, accelerated arthritis, and other disabling or deadly conditions. Effective management strategies are of obvious importance.

Relationship of glycemic control to outcomes

It is well established that the risk of microvascular and macrovascular complications is related to glycemia, as measured by HbA1c; this remains a major focus of therapy (29). Prospective randomized trials have documented reduced rates of microvascular complications in type 2 diabetic patients treated to lower glycemic targets. In the UK Prospective Diabetes Study (UKPDS) (30,31), patients with newly diagnosed type 2 diabetes were randomized to two treatment policies. In the standard group, lifestyle intervention was the mainstay with pharmacological therapy used only if hyperglycemia became severe. In the more intensive treatment arm, patients were randomly assigned to either a sulfonylurea or insulin, with a subset of overweight patients randomized to metformin. The overall HbA1c achieved was 0.9% lower in the intensive policy group compared with the conventional policy arm (7.0% vs. 7.9%). Associated with this difference in glycemic control was a reduction in the risk of microvascular complications (retinopathy, nephropathy, neuropathy) with intensive therapy. A trend toward reduced rates of myocardial infarction in this group did not reach statistical significance (30).
By contrast, substantially fewer metformin-treated patients experienced myocardial infarction, diabetes-related and all-cause mortality (32), despite a mean HbA1c only 0.6% lower than the conventional policy group. The UKPDS 10-year follow-up demonstrated that the relative benefit of having been in the intensive management policy group was maintained over a decade, resulting in the emergence of statistically significant benefits on cardiovascular disease (CVD) end points and total mortality in those initially assigned to sulfonylurea/insulin, and persistence of CVD benefits with metformin (33), in spite of the fact that the mean HbA1c levels between the groups converged soon after the randomized component of the trial had concluded. In 2008, three shorter-term studies [Action to Control Cardiovascular Risk in Diabetes (ACCORD) (34), Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified-Release Controlled Evaluation (ADVANCE) (35), Veterans Affairs Diabetes Trial (VADT) (36)] reported the effects of two levels of glycemic control on cardiovascular end points in middle-aged and older individuals with well-established type 2 diabetes at high risk for cardiovascular events. ACCORD and VADT aimed for an HbA1c […]

In patients who present with severe hyperglycemia (e.g., >16.7–19.4 mmol/L [>300–350 mg/dL]) or a very high HbA1c (e.g., ≥10.0–12.0%), insulin therapy should be strongly considered from the outset. Such treatment is mandatory when catabolic features are exhibited or, of course, if ketonuria is demonstrated, the latter reflecting profound insulin deficiency. Importantly, unless there is evidence of type 1 diabetes, once symptoms are relieved, glucotoxicity resolved, and the metabolic state stabilized, it may be possible to taper insulin partially or entirely, transferring to noninsulin antihyperglycemic agents, perhaps in combination.

Figure 2. Antihyperglycemic therapy in type 2 diabetes: general recommendations. Moving from the top to the bottom of the figure, potential sequences of antihyperglycemic therapy. In most patients, begin with lifestyle changes; metformin monotherapy is added at, or soon after, diagnosis (unless there are explicit contraindications). If the HbA1c target is not achieved after ∼3 months, consider one of the five treatment options combined with metformin: a sulfonylurea, TZD, DPP-4 inhibitor, GLP-1 receptor agonist, or basal insulin. (The order in the chart is determined by historical introduction and route of administration and is not meant to denote any specific preference.) Choice is based on patient and drug characteristics, with the over-riding goal of improving glycemic control while minimizing side effects. Shared decision making with the patient may help in the selection of therapeutic options. The figure displays drugs commonly used both in the U.S. and/or Europe. Rapid-acting secretagogues (meglitinides) may be used in place of sulfonylureas. Other drugs not shown (α-glucosidase inhibitors, colesevelam, dopamine agonists, pramlintide) may be used where available in selected patients but have modest efficacy and/or limiting side effects. In patients intolerant of, or with contraindications for, metformin, select initial drug from other classes depicted and proceed accordingly. In this circumstance, while published trials are generally lacking, it is reasonable to consider three-drug combinations other than metformin. Insulin is likely to be more effective than most other agents as a third-line therapy, especially when HbA1c is very high (e.g., ≥9.0%).
The therapeutic regimen should include some basal insulin before moving to more complex insulin strategies (Fig. 3). The dashed arrow line on the left-hand side of the figure denotes the option of a more rapid progression from a two-drug combination directly to multiple daily insulin doses, in those patients with severe hyperglycemia (e.g., HbA1c ≥10.0–12.0%). DPP-4-i, DPP-4 inhibitor; Fx's, bone fractures; GI, gastrointestinal; GLP-1-RA, GLP-1 receptor agonist; HF, heart failure; SU, sulfonylurea. (a) Consider beginning at this stage in patients with very high HbA1c (e.g., ≥9%). (b) Consider rapid-acting, nonsulfonylurea secretagogues (meglitinides) in patients with irregular meal schedules or who develop late postprandial hypoglycemia on sulfonylureas. (c) See Table 1 for additional potential adverse effects and risks, under “Disadvantages.” (d) Usually a basal insulin (NPH, glargine, detemir) in combination with noninsulin agents. (e) Certain noninsulin agents may be continued with insulin (see text). Refer to Fig. 3 for details on regimens. Consider beginning at this stage if the patient presents with severe hyperglycemia (≥16.7–19.4 mmol/L [≥300–350 mg/dL]; HbA1c ≥10.0–12.0%) with or without catabolic features (weight loss, ketosis, etc.).

If metformin cannot be used, another oral agent could be chosen, such as a sulfonylurea/glinide, pioglitazone, or a DPP-4 inhibitor; in occasional cases where weight loss is seen as an essential aspect of therapy, initial treatment with a GLP-1 receptor agonist might be useful. Where available, less commonly used drugs (AGIs, colesevelam, bromocriptine) might also be considered in selected patients, but their modest glycemic effects and side-effect profiles make them less attractive candidates. Specific patient preferences, characteristics, susceptibilities to side effects, potential for weight gain and hypoglycemia should play a major role in drug selection (20,21). (See Supplementary Figs. for adaptations of Fig. 2 that address specific patient scenarios.)

Advancing to dual combination therapy.

Figure 2 (and Supplementary Figs.) also depicts potential sequences of escalating glucose-lowering therapy beyond metformin. If monotherapy alone does not achieve/maintain an HbA1c target over ∼3 months, the next step would be to add a second oral agent, a GLP-1 receptor agonist, or basal insulin (5,10). Notably, the higher the HbA1c, the more likely insulin will be required. On average, any second agent is typically associated with an approximate further reduction in HbA1c of ∼1% (70,79). If no clinically meaningful glycemic reduction (i.e., “nonresponder”) is demonstrated, then, adherence having been investigated, that agent should be discontinued, and another with a different mechanism of action substituted. With a distinct paucity of long-term comparative-effectiveness trials available, uniform recommendations on the best agent to be combined with metformin cannot be made (80). Thus, advantages and disadvantages of specific drugs for each patient should be considered (Table 1).
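The stepwise escalation sketched in Figure 2 and in the paragraph above (metformin at or soon after diagnosis, a second agent if the HbA1c target is not reached after roughly 3 months, then a third agent or basal insulin) can be summarized schematically. The snippet below is a deliberately rough abstraction for illustration; the drug names listed come from the text, but the simple step counter and function shape are assumptions, not a decision tool from the statement.

```python
SECOND_LINE = ["sulfonylurea", "TZD", "DPP-4 inhibitor",
               "GLP-1 receptor agonist", "basal insulin"]  # options named in Figure 2

def next_step(current_regimen, hba1c_at_target):
    """Very rough schematic of the Figure 2 escalation sequence (illustrative only)."""
    if hba1c_at_target:
        return current_regimen                      # at target: no change, reassess later
    if current_regimen == ["lifestyle"]:
        return current_regimen + ["metformin"]      # add metformin at/soon after diagnosis
    if len(current_regimen) == 2:                   # metformin monotherapy not sufficient
        return current_regimen + ["one of: " + ", ".join(SECOND_LINE)]
    if len(current_regimen) == 3:                   # two-drug combination not sufficient
        return current_regimen + ["third agent or basal insulin"]
    return current_regimen + ["more complex insulin strategy (Fig. 3)"]

print(next_step(["lifestyle", "metformin"], hba1c_at_target=False))
```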
Some antihyperglycemic medications lead to weight gain. This may be associated with worsening markers of insulin resistance and cardiovascular risk. One exception may be TZDs (57); weight gain associated with this class occurs in association with decreased insulin resistance. Although there is no uniform evidence that increases in weight in the range observed with certain therapies translate into a substantially increased cardiovascular risk, it remains important to avoid unnecessary weight gain by optimal medication selection and dose titration. For all medications, consideration should also be given to overall tolerability. Even occasional hypoglycemia may be devastating, if severe, or merely irritating, if mild (81). Gastrointestinal side effects may be tolerated by some, but not others. Fluid retention may pose a clinical or merely an aesthetic problem (82). The risk of bone fractures may be a specific concern in postmenopausal women (57). It must be acknowledged that costs are a critical issue driving the selection of glucose-lowering agents in many environments. For resource-limited settings, less expensive agents should be chosen. However, due consideration should be also given to side effects and any necessary monitoring, with their own cost implications. Moreover, prevention of morbid long-term complications will likely reduce long-term expenses attributed to the disease.

Advancing to triple combination therapy.

Some studies have shown advantages of adding a third noninsulin agent to a two-drug combination that is not yet or no longer achieving the glycemic target (83–86). Not surprisingly, however, at this juncture, the most robust response will usually be with insulin. Indeed, since diabetes is associated with progressive β-cell loss, many patients, especially those with long-standing disease, will eventually need to be transitioned to insulin, which should be favored in circumstances where the degree of hyperglycemia (e.g., ≥8.5%) makes it unlikely that another drug will be of sufficient benefit (87). If triple combination therapy exclusive of insulin is tried, the patient should be monitored closely, with the approach promptly reconsidered if it proves to be unsuccessful. Many months of uncontrolled hyperglycemia should specifically be avoided. In using triple combinations the essential consideration is obviously to use agents with complementary mechanisms of action (Fig. 2 and Supplementary Figs.). Increasing the number of drugs heightens the potential for side effects and drug–drug interactions, raises costs, and negatively impacts patient adherence. The rationale, benefits, and side effects of each new medication should be discussed with the patient. The clinical characteristics of patients more or less likely to respond to specific combinations are, unfortunately, not well defined.

Transitions to and titrations of insulin.

Most patients express reluctance to beginning injectable therapy, but, if the practitioner feels that such a transition is important, encouragement and education can usually overcome such reticence. Insulin is typically begun at a low dose (e.g., 0.1–0.2 U kg⁻¹ day⁻¹), although larger amounts (0.3–0.4 U kg⁻¹ day⁻¹) are reasonable in the more severely hyperglycemic. The most convenient strategy is with a single injection of a basal insulin, with the timing of administration dependent on the patient's schedule and overall glucose profile (Fig. 3).

Figure 3. Sequential insulin strategies in type 2 diabetes. Basal insulin alone is usually the optimal initial regimen, beginning at 0.1–0.2 units/kg body weight, depending on the degree of hyperglycemia. It is usually prescribed in conjunction with one to two noninsulin agents.
In patients willing to take more than one injection and who have higher HbA1c levels (≥9.0%), twice-daily premixed insulin or a more advanced basal plus mealtime insulin regimen could also be considered (curved dashed arrow lines). When basal insulin has been titrated to an acceptable fasting glucose but HbA1c remains above target, consider proceeding to basal plus mealtime insulin, consisting of one to three injections of rapid-acting analogs (see text for details). A less studied alternative—progression from basal insulin to a twice-daily premixed insulin—could be also considered (straight dashed arrow line); if this is unsuccessful, move to basal plus mealtime insulin. The figure describes the number of injections required at each stage, together with the relative complexity and flexibility. Once a strategy is initiated, titration of the insulin dose is important, with dose adjustments made based on the prevailing glucose levels as reported by the patient. Noninsulin agents may be continued, although insulin secretagogues (sulfonylureas, meglitinides) are typically stopped once more complex regimens beyond basal insulin are utilized. Comprehensive education regarding self-monitoring of blood glucose, diet, exercise, and the avoidance of, and response to, hypoglycemia are critical in any patient on insulin therapy. Mod., moderate.

Although extensive dosing instructions for insulin are beyond the scope of this statement, most patients can be taught to uptitrate their own insulin dose based on several algorithms, each essentially involving the addition of a small dose increase if hyperglycemia persists (74,76,88). For example, the addition of 1–2 units (or, in those already on higher doses, increments of 5–10%) to the daily dose once or twice weekly if the fasting glucose levels are above the preagreed target is a reasonable approach (89). As the target is neared, dosage adjustments should be more modest and occur less frequently. Downward adjustment is advisable if any hypoglycemia occurs. During self-titration, frequent contact (telephone, e-mail) with the clinician may be necessary. Practitioners themselves can, of course, also titrate basal insulin, but this would involve more intensive contact with the patient than typically available in routine clinical practice. Daily self-monitoring of blood glucose is of obvious importance during this phase. After the insulin dose is stabilized, the frequency of monitoring should be reviewed (90).
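The self-titration rule just described (add 1–2 units, or increments of 5–10% at higher doses, once or twice weekly while fasting glucose stays above the agreed target, with smaller steps near the target and a reduction after hypoglycemia) can be written as a simple update rule. The sketch below is purely illustrative; the glucose thresholds and step sizes are assumptions standing in for the individually agreed targets the text refers to.

```python
def next_basal_dose(current_dose, fasting_glucose, target, hypoglycemia_occurred):
    """Illustrative weekly basal-insulin adjustment echoing the rule described above."""
    if hypoglycemia_occurred:
        return current_dose - 2                  # downward adjustment after hypoglycemia
    if fasting_glucose <= target:
        return current_dose                      # at target: no change
    if fasting_glucose <= target + 1.0:          # close to target (mmol/L): smaller step
        return current_dose + 1
    step = max(2, round(current_dose * 0.075))   # 1-2 units, or roughly 5-10% at higher doses
    return current_dose + step

print(next_basal_dose(20, fasting_glucose=9.5, target=7.0, hypoglycemia_occurred=False))
```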
Consideration should be given to the addition of prandial or mealtime insulin coverage when significant postprandial glucose excursions (e.g., to >10.0 mmol/L [>180 mg/dL]) occur. This is suggested when the fasting glucose is at target but the HbA1c remains above goal after 3–6 months of basal insulin titration (91). The same would apply if large drops in glucose occur during overnight hours or in between meals, as the basal insulin dose is increased. In this scenario, the basal insulin dose would obviously need to be simultaneously decreased as prandial insulin is initiated. Although basal insulin is titrated primarily against the fasting glucose, generally irrespective of the total dose, practitioners should be aware that the need for prandial insulin therapy will become likely the more the daily dose exceeds 0.5 U kg⁻¹ day⁻¹, especially as it approaches 1 U kg⁻¹ day⁻¹. The aim with mealtime insulin is to blunt postprandial glycemic excursions, which can be extreme in some individuals, resulting in poor control during the day.

Such coverage may be provided by one of two methods. The most precise and flexible prandial coverage is possible with “basal-bolus” therapy, involving the addition of premeal rapid-acting insulin analog to ongoing basal insulin. One graduated approach is to add prandial insulin before the meal responsible for the largest glucose excursion—typically that with the greatest carbohydrate content, often, but not always, the evening meal (92). Subsequently, a second injection can be administered before the meal with the next largest excursion (often breakfast). Ultimately, a third injection may be added before the smallest meal (often lunch) (93). The actual glycemic benefits of these more advanced regimens after basal insulin are generally modest in typical patients (92). So, again, individualization of therapy is key, incorporating the degree of hyperglycemia needing to be addressed and the overall capacities of the patient. Importantly, data trends from self-monitoring may be particularly helpful in titrating insulins and their doses within these more advanced regimens to optimize control.

A second, perhaps more convenient but less adaptable method involves “premixed” insulin, consisting of a fixed combination of an intermediate insulin with regular insulin or a rapid analog. Traditionally, this is administered twice daily, before morning and evening meals. In general, when compared with basal insulin alone, premixed regimens tend to lower HbA1c to a larger degree, but often at the expense of slightly more hypoglycemia and weight gain (94). Disadvantages include the inability to titrate the shorter- from the longer-acting component of these formulations. Therefore, this strategy is somewhat inflexible but may be appropriate for certain patients who eat regularly and may be in need of a simplified approach beyond basal insulin (92,93). (An older and less commonly used variation of this two-injection strategy is known as “split-mixed,” involving a fixed amount of intermediate insulin mixed by the patient with a variable amount of regular insulin or a rapid analog. This allows for greater flexibility in dosing.)

The key messages from dozens of comparative insulin trials in type 2 diabetes include the following:
1. Any insulin will lower glucose and HbA1c.
2. All insulins are associated with some weight gain and some risk of hypoglycemia.
3. The larger the doses and the more aggressive the titration, the lower the HbA1c, but often with a greater likelihood of adverse effects.
4. Generally, long-acting insulin analogs reduce the incidence of overnight hypoglycemia, and rapid-acting insulin analogs reduce postprandial glucose excursions as compared with corresponding human insulins (NPH, Regular), but they generally do not result in clinically significantly lower HbA1c.

Metformin is often continued when basal insulin is added, with studies demonstrating less weight gain when the two are used together (95). Insulin secretagogues do not seem to provide for additional HbA1c reduction or prevention of hypoglycemia or weight gain after insulin is started, especially after the dose is titrated and stabilized. When basal insulin is used, continuing the secretagogue may minimize initial deterioration of glycemic control. However, secretagogues should be avoided once prandial insulin regimens are employed.
TZDs should be reduced in dose (or stopped) to avoid edema and excessive weight gain, although in certain individuals with large insulin requirements from severe insulin resistance, these insulin sensitizers may be very helpful in lowering HbA1c and minimizing the required insulin dose (96). Data concerning the glycemic benefits of incretin-based therapy combined with basal insulin are accumulating; combination with GLP-1 receptor agonists may be helpful in some patients (97,98). Once again, the costs of these more elaborate combined regimens must be carefully considered.

OTHER CONSIDERATIONS

Age

Older adults (>65–70 years) often have a higher atherosclerotic disease burden, reduced renal function, and more comorbidities (99,100). Many are at risk for adverse events from polypharmacy and may be both socially and economically disadvantaged. Life expectancy is reduced, especially in the presence of long-term complications. They are also more likely to be compromised by hypoglycemia; for example, unsteadiness may result in falls and fractures (101), and a tenuous cardiac status may deteriorate into catastrophic events. It follows that glycemic targets for elderly with long-standing or more complicated disease should be less ambitious than for the younger, healthier individuals (20). If lower targets cannot be achieved with simple interventions, an HbA1c of <7.5–8.0% may be acceptable, transitioning upward as age increases and capacity for self-care, cognitive, psychological and economic status, and support systems decline. While lifestyle modification can be successfully implemented across all age-groups, in the aged, the choice of antihyperglycemic agent should focus on drug safety, especially protecting against hypoglycemia, heart failure, renal dysfunction, bone fractures, and drug–drug interactions. Strategies specifically minimizing the risk of low blood glucose may be preferred. In contrast, healthier patients with long life expectancy accrue risk for vascular complications over time. Therefore, lower glycemic targets (e.g., an HbA1c <6.5–7.0%) and tighter control of body weight, blood pressure, and circulating lipids should be achieved to prevent or delay such complications. This usually requires combination therapy, the early institution of which may have the best chance of modifying the disease process and preserving quality of life.

Weight

The majority of individuals with type 2 diabetes are overweight or obese (∼80%) (102). In these, intensive lifestyle intervention can improve fitness, glycemic control, and cardiovascular risk factors for relatively small changes in body weight (103). Although insulin resistance is thought of as the predominant driver of diabetes in obese patients, they actually have a similar degree of islet dysfunction to leaner patients (37). Perhaps as a result, the obese may be more likely to require combination drug therapy (20,104). While common practice has favored metformin in heavier patients, because of weight loss/weight neutrality, this drug is as efficacious in lean individuals (75). TZDs, on the other hand, appear to be more effective in those with higher BMIs, although their associated weight gain makes them, paradoxically, a less attractive option here. GLP-1 receptor agonists are associated with weight reduction (38), which in some patients may be substantial. Bariatric surgery is an increasingly popular option in severe obesity. Type 2 diabetes frequently resolves rapidly after these procedures.
The majority of patients are able to stop some, or even all, of their antihyperglycemic medications, although the durability of this effect is not known (105). In lean patients, consideration should be given to the possibility of latent autoimmune diabetes in adults (LADA), a slowly progressive form of type 1 diabetes. These individuals, while presenting with mild hyperglycemia, often responsive to oral agents, eventually develop more severe hyperglycemia and require intensive insulin regimens (106). Measuring titres of islet-associated autoantibodies (e.g., anti-GAD) may aid their identification, encouraging a more rapid transition to insulin therapy.

Sex/racial/ethnic/genetic differences

While certain racial/ethnic features that increase the risk of diabetes are well recognized [greater insulin resistance in Latinos (107), more β-cell dysfunction in East Asians (108)], using this information to craft optimal therapeutic strategies is in its infancy. This is not surprising given the polygenic inheritance pattern of the disease. Indeed, while matching a drug's mechanism of action to the underlying causes of hyperglycemia in a specific patient seems logical, there are few data that compare strategies based on this approach (109). There are few exceptions, mainly involving monogenic variants of diabetes often confused with type 2 diabetes, such as maturity-onset diabetes of the young (MODY), several forms of which respond preferentially to sulfonylureas (110). While there are no prominent sex differences in the response to various antihyperglycemic drugs, certain side effects (e.g., bone loss with TZDs) may be of greater concern in women.

Comorbidities

Coronary artery disease.

Given the frequency with which type 2 diabetic patients develop atherosclerosis, optimal management strategies for those with or at high risk for coronary artery disease (CAD) are important. Since hypoglycemia may exacerbate myocardial ischemia and may cause dysrhythmias (111), it follows that medications that predispose patients to this adverse effect should be avoided, if possible. If they are required, however, to achieve glycemic targets, patients should be educated to minimize risk. Because of possible effects on potassium channels in the heart, certain sulfonylureas have been proposed to aggravate myocardial ischemia through effects on ischemic preconditioning (112), but the actual clinical relevance of this remains unproven. Metformin may have some cardiovascular benefits and would appear to be a useful drug in the setting of CAD, barring prevalent contraindications (32). In a single study, pioglitazone was shown to modestly reduce major adverse cardiovascular events in patients with established macrovascular disease. It may therefore also be considered, unless heart failure is present (60). In very preliminary reports, therapy with GLP-1 receptor agonists and DPP-4 inhibitors has been associated with improvement in either cardiovascular risk or risk factors, but there are no long-term data regarding clinical outcomes (113). There are very limited data suggesting that AGIs (114) and bromocriptine (115) may reduce cardiovascular events.

Heart failure.

With an aging population and recent decreases in mortality after myocardial infarction, the diabetic patient with progressive heart failure is an increasingly common scenario (116). This population presents unique challenges given their polypharmacy, frequent hospitalizations, and contraindications to various agents. TZDs should be avoided (117,118).
Metformin, previously contraindicated in heart failure, can now be used if the ventricular dysfunction is not severe, if the patient's cardiovascular status is stable, and if renal function is normal (119). As mentioned, the cardiovascular effects of incretin-based therapies, including those on ventricular function, are currently under investigation (120).

Chronic kidney disease. Kidney disease is highly prevalent in type 2 diabetes, and moderate to severe renal impairment (eGFR <60 mL/min) occurs in approximately 20–30% of patients (121,122). The individual with progressive renal dysfunction is at increased risk for hypoglycemia, a risk that is multifactorial: insulin and, to some degree, the incretin hormones are eliminated more slowly, as are antihyperglycemic drugs with renal excretion. Thus, dose reduction may be necessary, contraindications need to be observed, and consequences (hypoglycemia, fluid retention, etc.) require careful evaluation. Current U.S. prescribing guidelines warn against the use of metformin in patients with a serum creatinine ≥133 µmol/L (≥1.5 mg/dL) in men or ≥124 µmol/L (≥1.4 mg/dL) in women. Metformin is eliminated renally, and cases of lactic acidosis have been described in patients with renal failure (123). There is ongoing debate, however, as to whether these thresholds are too restrictive and whether those with mild-to-moderate renal impairment would gain more benefit than harm from metformin (124,125). In the U.K., the National Institute for Health and Clinical Excellence (NICE) guidelines are less proscriptive and more evidence-based than those in the U.S., generally allowing use down to a GFR of 30 mL/min, with dose reduction advised at 45 mL/min (14). Given the current widespread reporting of estimated GFR, these guidelines appear very reasonable. Most insulin secretagogues undergo significant renal clearance (exceptions include repaglinide and nateglinide), and the risk of hypoglycemia is therefore higher in patients with chronic kidney disease (CKD). For most of these agents, extreme caution is imperative at more severe degrees of renal dysfunction. Glyburide (known as glibenclamide in Europe), which has a prolonged duration of action and active metabolites, should be specifically avoided in this group. Pioglitazone is not eliminated renally, so there are no restrictions on its use in CKD, although fluid retention may be a concern. Among the DPP-4 inhibitors, sitagliptin, vildagliptin, and saxagliptin share prominent renal elimination, and dose reduction is necessary in advanced CKD; one exception is linagliptin, which is predominantly eliminated enterohepatically. Among the GLP-1 receptor agonists, exenatide is contraindicated in stage 4–5 CKD (GFR <30 mL/min) because it is renally eliminated; the safety of liraglutide in CKD is not established, although pharmacokinetic studies suggest that its levels are unaffected, as its clearance does not depend on renal function. More severe renal impairment is associated with slower elimination of all insulins, so insulin doses need to be titrated carefully, with awareness of the potential for more prolonged activity profiles.
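The renal thresholds cited above lend themselves to a compact numeric summary. The sketch below is illustrative only: the function names and structure are hypothetical, it encodes just the cutoffs mentioned in this section (the U.S. creatinine limits, the NICE eGFR bands for metformin, and the exenatide contraindication below 30 mL/min), and it is not prescribing software.

```python
# Illustrative summary of the renal thresholds discussed above; function
# names and structure are hypothetical, and this is not prescribing software.

CREATININE_UMOL_PER_MGDL = 88.4  # 1.5 mg/dL is about 133 µmol/L; 1.4 mg/dL is about 124 µmol/L

def metformin_guidance_nice(egfr_ml_min: float) -> str:
    """NICE-style guidance cited in the text (14): use down to about 30 mL/min,
    with dose reduction advised below 45 mL/min."""
    if egfr_ml_min < 30:
        return "avoid"
    if egfr_ml_min < 45:
        return "continue with dose reduction"
    return "usual dosing"

def exenatide_allowed(egfr_ml_min: float) -> bool:
    """Exenatide is contraindicated in stage 4-5 CKD (GFR <30 mL/min)."""
    return egfr_ml_min >= 30

print(metformin_guidance_nice(38))            # continue with dose reduction
print(exenatide_allowed(25))                  # False
print(round(1.5 * CREATININE_UMOL_PER_MGDL))  # 133, the U.S. male cutoff in µmol/L
```

Running the snippet prints the suggested metformin handling at an eGFR of 38 mL/min, shows that exenatide would not be used at a GFR of 25 mL/min, and confirms that the 1.5 mg/dL creatinine cutoff corresponds to roughly 133 µmol/L.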
Liver dysfunction. Individuals with type 2 diabetes frequently have hepatosteatosis as well as other forms of liver disease (126). There is preliminary evidence that patients with fatty liver may benefit from treatment with pioglitazone (45,127,128). Pioglitazone should not be used, however, in individuals with active liver disease or an alanine aminotransferase level more than 2.5 times the upper limit of normal. In those with steatosis but milder liver test abnormalities, this insulin sensitizer may be advantageous. Sulfonylureas can rarely cause abnormalities in liver tests but are not specifically contraindicated; meglitinides can also be used. If hepatic disease is severe, secretagogues should be avoided because of the increased risk of hypoglycemia. In patients with mild hepatic disease, incretin-based drugs can be prescribed unless there is a coexisting history of pancreatitis. Insulin carries no restrictions in patients with liver impairment and is indeed the preferred choice in those with advanced disease.

Hypoglycemia. Hypoglycemia in type 2 diabetes was long thought to be a trivial issue, as it occurs less commonly than in type 1 diabetes. However, concern is emerging, based mainly on the results of recent clinical trials and on cross-sectional evidence of an increased risk of brain dysfunction in those with repeated episodes. In the ACCORD trial, the frequency of both minor and major hypoglycemia was high in intensively managed patients, threefold that seen with conventional therapy (129). It remains unknown whether hypoglycemia was the cause of the increased mortality in the intensive group (130,131). Clearly, however, hypoglycemia is more dangerous in the elderly and occurs consistently more often as glycemic targets are lowered. Beyond dysrhythmias, hypoglycemia may lead to accidents, dizziness, and falls (which are more likely to be dangerous in the elderly) (132), confusion (so that other therapies may be omitted or taken incorrectly), or infection (such as aspiration during sleep, leading to pneumonia). Hypoglycemia may also be systematically under-reported as a cause of death, so its true impact may not be fully appreciated. Perhaps just as importantly, additional consequences of frequent hypoglycemia include work disability and erosion of the confidence of the patient (and of family or caregivers) in the patient's ability to live independently. Accordingly, in at-risk individuals, drug selection should favor agents that do not precipitate such events, and blood glucose targets may in general need to be moderated.
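The comorbidity-specific cautions above can also be restated in a structured, at-a-glance form. The sketch below is only an illustrative condensation of the points made in this section; the dictionary name and layout are hypothetical, and the entries are no more complete or authoritative than the prose they summarize.

```python
# Structured restatement of the comorbidity-specific cautions discussed above.
# Purely illustrative; the dictionary name and layout are hypothetical.
AGENT_CAUTIONS = {
    "coronary artery disease": [
        "avoid agents prone to hypoglycemia where possible",
        "metformin may be useful barring contraindications",
        "pioglitazone may be considered unless heart failure is present",
    ],
    "heart failure": [
        "avoid TZDs",
        "metformin acceptable if status is stable, ventricular dysfunction not severe, renal function normal",
    ],
    "chronic kidney disease": [
        "dose-reduce or avoid renally cleared agents",
        "avoid glyburide (glibenclamide)",
        "exenatide contraindicated at GFR <30 mL/min",
    ],
    "liver dysfunction": [
        "no pioglitazone with active disease or ALT >2.5x upper limit of normal",
        "avoid secretagogues if disease is severe",
        "insulin preferred in advanced disease",
    ],
    "high hypoglycemia risk (e.g., elderly)": [
        "favor agents with low hypoglycemia risk",
        "moderate glycemic targets",
    ],
}

for condition, cautions in AGENT_CAUTIONS.items():
    print(condition, "->", "; ".join(cautions))
```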
FUTURE DIRECTIONS/RESEARCH NEEDS

For antihyperglycemic management of type 2 diabetes, the comparative evidence base to date is relatively lean, especially beyond metformin monotherapy (70). There is a significant need for high-quality comparative-effectiveness research, not only regarding glycemic control but also regarding costs and the outcomes that matter most to patients: quality of life and the avoidance of morbid and life-limiting complications, especially CVD (19,23,70). Another issue about which more data are needed is durability of effectiveness (often ascribed to β-cell preservation), which would serve to stabilize metabolic control and decrease the future treatment burden for patients. Pharmacogenetics may well inform treatment decisions in the future, guiding the clinician to recommend a therapy for an individual patient based on predictors of response and susceptibility to adverse effects. More clinical data are also needed on how phenotype and other patient and disease characteristics should drive drug choices. As new medications are introduced to the type 2 diabetes pharmacopeia, their benefit and safety should be demonstrated in studies against best current treatment that are substantial enough in both size and duration to provide meaningful data on clinically important outcomes. It is appreciated, however, that head-to-head comparisons of all combinations and permutations would be impossibly large (133). Informed judgment and the expertise of experienced clinicians will therefore always be necessary.

                Author and article information

Journal: Journal of General Internal Medicine (J Gen Intern Med), Springer Science and Business Media LLC
ISSN: 0884-8734 (print); 1525-1497 (electronic)
Published: January 2019 (online first: November 7, 2018)
Volume 34, Issue 1, pp. 75–81
DOI: 10.1007/s11606-018-4731-0
© 2019

