      Comparison of Provisional with Final Notifiable Disease Case Counts — National Notifiable Diseases Surveillance System, 2009

      research-article
      MMWR. Morbidity and Mortality Weekly Report
      U.S. Centers for Disease Control


          Abstract

States report notifiable disease cases to CDC through the National Notifiable Diseases Surveillance System (NNDSS). This allows CDC to assist with public health action and to monitor infectious diseases across jurisdictional boundaries nationwide. The Morbidity and Mortality Weekly Report (MMWR) is used to disseminate these data on infectious disease incidence. The extent to which weekly counts of notifiable conditions are overreported or underreported can affect public health understanding of changes in the burden, distribution, and trends in disease, which is essential for control of communicable diseases (1). NNDSS encourages state health departments to notify CDC of a case when it is initially reported; these cases are included in the weekly provisional counts. The status of reported cases can change after further investigation by the states, resulting in differences between provisional and final counts. Increased knowledge of these differences can help guide the use of information from NNDSS. To quantify the extent to which final counts differ from provisional counts of notifiable infectious diseases in the United States, CDC analyzed 2009 NNDSS data for 67 conditions. The results of this analysis demonstrate that final case counts were lower than provisional counts for five conditions but higher than provisional counts for 59 conditions. The median difference between final and provisional counts was 16.7%; differences were ≤20% for 39 diseases but >50% for 12. These differences occurred for a variety of diseases and in all states. Provisional case counts should be interpreted with caution and with an understanding of the reporting process.

Reporting of cases of certain diseases is mandated at the state or local level, and states, the Council of State and Territorial Epidemiologists (CSTE), and CDC establish the policies and procedures for submitting data from these jurisdictions to NNDSS. Not all notifiable diseases are reportable at the state level, and although disease reporting is mandated by legislation or regulation, state reporting to CDC is voluntary. States send reports of cases of nationally notifiable diseases to CDC weekly in one of several standard formats; amended reports can be sent as well as new reports. Cases are reported by week of notification to CDC, and cases reported each week to CDC and published in MMWR are deemed provisional. The NNDSS database is open throughout the year, allowing states to update their records as new information becomes available. Annually, CDC provides each state epidemiologist with a cutoff date (usually 6 months after the end of the reporting year) by which all records must be reconciled; after that date, no additional updates are accepted for that reporting period. After the database is closed, final case counts, prepared after the states have reconciled the year-to-date data with local reporting units, are approved by state epidemiologists as accurate final counts for the year and are published in the MMWR Summary of Notifiable Diseases — United States. Data for 2009 were published in 2011 (2). CDC's publication schedule allows states time to complete case investigation tasks.

To examine the extent to which provisional counts of infectious diseases differ from final counts, CDC compared the cumulative case counts for week 52 of 2009, published in the MMWR of January 8, 2010, with the case counts in the NNDSS final data set for 2009 (cutoff date of June 2010), published in MMWR on August 20, 2010.
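
The comparison described above amounts to a percentage difference per condition between the final and provisional counts. The following minimal Python sketch (not CDC's code) illustrates that calculation using only the national provisional/final pairs quoted in this report; because those pairs are the extreme examples, the summary figures it prints will not match the full 67-condition analysis (median 16.7%).

```python
# Minimal sketch of the provisional-vs-final comparison: percentage
# difference per condition, then simple summary statistics. The pairs
# below are national counts quoted in this report for a few conditions;
# the actual analysis covered all 67 conditions.
from statistics import median

# condition -> (provisional count, final count)
counts = {
    "Haemophilus influenzae, invasive, age <5 yrs, unknown serotype": (218, 166),
    "hepatitis C, acute": (844, 782),
    "toxic-shock syndrome (other than streptococcal)": (76, 74),
    "influenza-associated pediatric mortality": (360, 358),
    "mumps": (982, 1991),
    "Hansen disease": (59, 103),
}

def pct_difference(provisional, final):
    """Percentage by which the final count differs from the provisional count."""
    return 100.0 * (final - provisional) / provisional

diffs = {name: pct_difference(p, f) for name, (p, f) in counts.items()}

for name, d in sorted(diffs.items(), key=lambda item: item[1]):
    print(f"{name}: {d:+.1f}%")

# These six pairs are the extreme examples quoted in the text, so their
# median will not reproduce the 16.7% reported for all 67 conditions.
print(f"median difference: {median(diffs.values()):+.1f}%")
print("conditions with difference <= 20%:", sum(1 for d in diffs.values() if abs(d) <= 20))
print("conditions with difference > 50%:", sum(1 for d in diffs.values() if abs(d) > 50))
```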

To assess whether discrepancies between provisional and final counts were more common in specific states or regions, or occurred everywhere, reporting of four diverse diseases was examined by state: one sexually transmitted disease (Chlamydia trachomatis, genital infection), one vaccine-preventable disease (pertussis), one foodborne disease (salmonellosis), and one vectorborne disease (Lyme disease). Data are not presented for tuberculosis or human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome because those data are published quarterly rather than weekly in MMWR; weekly reports of these conditions are of limited value to the public health community because of differences in reporting patterns for these diseases, and long-term variations in the number of cases are more important to public health practitioners than weekly variations (3).

Reported data for 67 notifiable diseases were reviewed. Final counts were lower than provisional counts for five diseases, the same as provisional counts for three, and higher for 59 (Table 1). The median difference between final and provisional counts was 16.7%; differences were ≤20% for 39 diseases but >50% for 12. Among diseases with ≥10 cases reported in 2009, final counts were lower than provisional counts for just four: invasive Haemophilus influenzae disease, ages <5 years, unknown serotype (final: 166; provisional: 218); acute hepatitis C (final: 782; provisional: 844); toxic-shock syndrome other than streptococcal (final: 74; provisional: 76); and influenza-associated pediatric mortality (final: 358; provisional: 360). Final counts were higher than provisional counts for 51 of these diseases. The greatest percentage differences between provisional and final case counts were for arboviral disease, West Nile virus (neuroinvasive and nonneuroinvasive) (final: 720; provisional: 0); mumps (final: 1,991; provisional: 982); and Hansen disease (final: 103; provisional: 59).

Examining four diverse but commonly reported diseases in detail revealed no consistent association between state or region and the magnitude of the discrepancy between final and provisional counts (Table 2). For Chlamydia trachomatis, genital infection, the final case count was 13.1% higher than the provisional count nationally; in no state was the final count more than 2% lower than the provisional count, and in six states it was ≥20% higher. Two states, Arkansas and North Carolina, reported no cases provisionally but reported final case counts of 14,354 and 41,045, respectively.

For Lyme disease, the final case count was 29.2% higher than the provisional count nationally. Only 23 jurisdictions reported >100 cases: 21 states, upstate New York, and New York City. Of these, four states reported a final count lower than their provisional count (range: 13.4%–29.2%), and eight jurisdictions reported final counts ≥20% higher. The greatest percentage differences between provisional and final case counts were in Connecticut (final: 4,156; provisional: none), Minnesota (final: 1,543; provisional: 169), Texas (final: 276; provisional: 48), and New York City (final: 1,051; provisional: 262).

For pertussis, the final case count was 24.8% higher than the provisional count nationally; in no state was the final count more than 2% lower than the provisional count, and it was ≥20% higher in 18 states and the District of Columbia (DC). Of the five states that reported >1,000 cases, those with the greatest percentage differences between provisional and final counts were Minnesota (final: 1,121; provisional: 165) and Texas (final: 3,358; provisional: 2,437).
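
The state-level comparisons above reduce to the same percentage-difference calculation, with one wrinkle: jurisdictions that reported no provisional cases (for example, Connecticut for Lyme disease) have no defined percentage difference. A minimal sketch, using only jurisdiction-level pairs quoted in this report:

```python
# Minimal sketch of the state-level comparison for single diseases, using
# provisional/final pairs quoted in this report. Jurisdictions with no
# provisional cases have an undefined percentage difference and are
# flagged instead of dividing by zero.

# jurisdiction -> (provisional count, final count)
pertussis = {"Minnesota": (165, 1121), "Texas": (2437, 3358)}
lyme_disease = {
    "Connecticut": (0, 4156),
    "Minnesota": (169, 1543),
    "Texas": (48, 276),
    "New York City": (262, 1051),
}

def describe(disease_name, by_jurisdiction):
    print(disease_name)
    for jurisdiction, (provisional, final) in by_jurisdiction.items():
        if provisional == 0:
            print(f"  {jurisdiction}: no provisional cases reported; final count {final}")
        else:
            pct = 100.0 * (final - provisional) / provisional
            print(f"  {jurisdiction}: final count {pct:+.1f}% vs provisional")

describe("pertussis", pertussis)
describe("Lyme disease", lyme_disease)
```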

For salmonellosis, the final case count was 10.6% higher than the provisional count nationally. Six states reported a final count lower than their provisional count (range: 0.1%–2.9%), and nine states plus DC reported final counts ≥20% higher, the largest differences being in DC (final: 100; provisional: 26), Louisiana (final: 1,180; provisional: 599), and Indiana (final: 629; provisional: 349).

Editorial Note

The findings in this report corroborate previous observations that provisional NNDSS data should be interpreted with caution (1,4,5). The primary appeal of provisional counts is timeliness; in comparison, final counts are more complete and accurate. As additional information is collected during investigations, final case counts might be higher or lower than the provisional counts. Local and state health departments collect reportable surveillance data primarily to assist with disease control and prevention efforts (i.e., to monitor local outbreaks of infectious diseases), to measure disease burden among high-risk populations, and to assess the effectiveness of local interventions. At the national level, these data can be compared with baseline data to detect unusual disease occurrences. Final data sets are useful for monitoring national trends and for determining the effectiveness of national intervention efforts. In 2009, final case counts did not differ from end-of-year provisional counts by >20% for two thirds of the 67 notifiable diseases examined. Understanding how provisional counts relate to final counts is essential for interpreting provisional data (6,7).

What is already known on this topic?
Provisional counts of notifiable diseases usually differ from final counts; the provisional counts are most often lower.

What is added by this report?
In 2009, finalized case counts were higher than the provisional case counts for 59 of 67 notifiable diseases. The median difference between final and provisional counts was 16.7%; differences were ≤20% for 39 diseases but >50% for 12. These differences occur, to a greater or lesser extent, for a wide variety of diseases and in all states.

What are the implications for public health practice?
Notifiable disease data are subject to case reclassification, leading to undernotification or overnotification. Provisional case counts should be interpreted with caution because of the reporting process. The primary appeal of provisional counts is timeliness; in comparison, final counts are more complete and accurate.

Final counts might be higher than provisional counts for several possible reasons: 1) as amended records are sent by states during the notification process, cases might be reclassified among the confirmed, probable, suspected, and not-a-case categories; 2) states vary in their practices regarding when they report cases with incomplete data or cases still under investigation, leading to variable delays; 3) allocation of cases to a state can be delayed; 4) laboratory testing, case investigation, and data entry can be delayed as a result of temporary staff absences (e.g., leave, furlough, or turnover); 5) states sometimes delay sending some reports to CDC until the end of the year; and 6) internal CDC data processing problems can cause discrepancies.

The findings in this report are subject to at least one limitation. It was not possible to determine when final counts became known to the state and local jurisdictions so that they could take public health action; this report focuses only on counts published in MMWR.
The jurisdictions might have been aware of final case counts sooner, with only the notification to CDC delayed. Although this study examined 1 year of data, previous research using multiple years of data for hepatitis A and B concluded that provisional data generally tend to underrepresent the final counts for those conditions (1). Adding more years to the current analysis, which examined multiple notifiable conditions and documented substantial differences across states, regions, and numerous conditions, would not be expected to change the overall results.

Interpreting weekly incidence data is complex because of surveillance system limitations. Nonetheless, health practitioners have to respond to public health threats on the basis of preliminary surveillance information. In 2006, CDC and CSTE reconsidered data presentation formats and added contextual information (e.g., the 5-year weekly average and the median and maximum numbers of cases for the previous 52 weeks) to aid interpretation of these data (3). However, the findings in this report illustrate that major challenges still exist in presenting and interpreting provisional data and highlight the need to examine specific factors that can contribute to late reporting of cases (e.g., late case reporting by providers to health departments or late reporting of cases by health departments to CDC) (4). Although information technology has improved notifiable disease reporting (8), NNDSS data remain subject to reporting artifacts. Understanding the specific reasons for variation between provisional and final case counts for each condition can improve the use of provisional data for disease surveillance and notification.
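
The contextual statistics mentioned above are straightforward to compute. The sketch below (illustrative random data; the exact windowing conventions of the MMWR tables are an assumption) shows the median and maximum counts for the previous 52 weeks alongside a current weekly count.

```python
# Minimal sketch of context statistics for presenting weekly provisional
# counts: the median and maximum of the previous 52 weeks of reports.
# The weekly series is randomly generated for illustration, and the exact
# windowing conventions of the MMWR tables are an assumption here.
import random
from statistics import median

random.seed(0)
weekly_reports = [random.randint(0, 40) for _ in range(104)]  # two years of weekly counts

current_week_count = weekly_reports[-1]
previous_52_weeks = weekly_reports[-53:-1]  # the 52 weeks preceding the current week

print("current week:", current_week_count)
print("previous 52-week median:", median(previous_52_weeks))
print("previous 52-week maximum:", max(previous_52_weeks))
```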


Most cited references (8)


          Detection of aberrations in the occurrence of notifiable diseases surveillance data.

          The detection of unusual patterns in the occurrence of diseases and other health events presents an important challenge to public health surveillance. This paper discusses three analytic methods for identifying aberrations in underlying distributions. The methods are illustrated on selected infectious diseases included in the National Notifiable Diseases Surveillance System of the Centers for Disease Control. Results suggest the utility of such an analytic approach. Further work will determine the sensitivity of such methods to variations in the occurrence of disease. These methods are useful for evaluating and monitoring public health surveillance data.

            Evaluation of a method for detecting aberrations in public health surveillance data.

            The detection of unusual patterns in routine public health surveillance data on diseases and injuries presents an important challenge to health workers interested in early identification of epidemics or clues to important risk factors. Each week, state health departments report the numbers of cases of about 50 notifiable diseases to the Centers for Disease Control and Prevention, and these reports are published weekly in the Morbidity and Mortality Weekly Report. A new analytic method and a horizontal bar graph were introduced in July 1989 to facilitate easy identification of unusual numbers of reported cases. Evaluation of the statistical properties of this method indicates that the results are fairly robust to nonnormality and serial correlation of the data. An epidemiologic evaluation of the method after the first 6 months showed that it is useful for detection of specific types of aberrations in public health surveillance.
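
The abstract above describes the method only in general terms. For orientation, the sketch below implements a generic historical-limits comparison of the kind applied to weekly notifiable disease counts (a current 4-week total checked against the mean and standard deviation of comparable historical totals); treating this as the specific method evaluated in the cited paper is an assumption.

```python
# Illustration of a historical-limits style check for surveillance counts:
# compare a current 4-week total with the mean and standard deviation of
# comparable historical 4-week totals. Whether this matches the specific
# method evaluated in the paper above is an assumption.
from statistics import mean, stdev

def historical_limits_check(current_total, baseline_totals, k=2.0):
    """Return the ratio of the current total to the baseline mean and
    whether the current total exceeds mean + k * SD of the baseline."""
    m = mean(baseline_totals)
    s = stdev(baseline_totals)
    return current_total / m, current_total > m + k * s

# 15 illustrative baseline totals (e.g., the same 4-week period and the two
# adjacent periods from each of the preceding 5 years) and a current total
baseline = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 9, 13, 11, 15, 12]
ratio, flagged = historical_limits_check(31, baseline)
print(f"ratio to historical mean: {ratio:.2f}; aberration flagged: {flagged}")
```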

              A review of strategies for enhancing the completeness of notifiable disease reporting.

              Notifiable disease surveillance systems provide essential data for infectious disease prevention and control programs at the local, state, and national levels. Given that reporting completeness is known to vary considerably, this review identifies methods that can reliably enhance completeness of reporting. These surveillance-related activities include initiating active surveillance when appropriate; implementing automated, electronic laboratory-based reporting; strengthening ties with clinicians and other key partners in notifiable disease reporting; and increasing the use of laboratory diagnostic tests in identifying new cases. Despite ample data in support of these strategies, notifiable disease surveillance continues to receive insufficient attention and resources. Recent attention to public health preparedness provides an opportunity to strengthen notifiable disease surveillance and enhance completeness of reporting.

                Author and article information

Journal: MMWR. Morbidity and Mortality Weekly Report (MMWR Morb Mortal Wkly Rep)
Publisher: U.S. Centers for Disease Control
ISSN: 0149-2195 (print); 1545-861X (electronic)
Published: 13 September 2013
Volume: 62; Issue: 36; Pages: 747-751
                Affiliations
                Div of Notifiable Diseases and Healthcare Information, Public Health Surveillance and Informatics Program Office
                Div of Viral Hepatitis, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC
                Author notes
Corresponding contributor: Nelson Adekoya, nba7@cdc.gov, 404-498-6258.
Article
PMC: 4585575
PMID: 24025757
Copyright © 2013

                All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.

Categories: Articles
