Comparison Benchmarks

The NRDR quality databases have been in existence since 2008 and provide comparison benchmarks, comparing facilities and physicians both to the database as a whole and to other similar facilities. Some of the measures on our submission list have been in use since early 2008 (e.g., CTC True Positive Rate and CTC Clinically Significant Extracolonic Findings Rate). To date, CMS has approved 24 non-MIPS measures, many of which have been in use since mid-2011. In early 2017, the Society of Interventional Radiology (SIR) and the American College of Radiology launched the new Interventional Radiology Registry to promote quality of care for patients undergoing interventional radiology procedures.


All our registry reports contain comparisons to all facilities in the registry. Most reports also contain comparisons to similar facilities, such as facilities of the same type, in similar locations, and in the same geographic region.

[Figure: NMD Facility Report - Histogram and Box-and-Whiskers]


Starting in late 2017, we will present comparisons as the decile of the performance-rate distribution into which a physician falls, mirroring the methodology used by CMS.
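The decile placement described above can be sketched as follows. This is an illustrative example only, not NRDR code; the function name and the peer rates are hypothetical.

```python
# Illustrative sketch (hypothetical data): place a physician's
# performance rate into a decile (1-10) of the registry-wide
# distribution of peer performance rates.
from bisect import bisect_right
from statistics import quantiles

def performance_decile(rate, peer_rates):
    """Return the decile (1-10) into which `rate` falls among `peer_rates`."""
    # Cut points between deciles: the 10th, 20th, ..., 90th percentiles.
    cuts = quantiles(peer_rates, n=10)
    # Count how many cut points the rate exceeds, shifted to 1-based deciles.
    return bisect_right(cuts, rate) + 1

peer = [0.62, 0.70, 0.75, 0.78, 0.80, 0.82, 0.85, 0.88, 0.90, 0.93,
        0.94, 0.95]  # hypothetical peer performance rates
print(performance_decile(0.91, peer))
```

A physician's rate is simply compared against the nine percentile cut points that divide the peer distribution into ten equal groups.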



Normative Benchmarks

Registry data in the Dose Index Registry have been used to develop size-specific benchmarks, by CT exam, for radiation exposure; these benchmarks are published and available for use by all physician practices. They are also being used to define new registry measures, such as the percentage of CT chest exams without contrast that perform at or better than the benchmark on Dose Length Product.

[Figure: DIR Exec Summary – 10 High Volume Exams]
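A measure like the one just described reduces to counting exams at or below a published benchmark. The sketch below uses hypothetical DLP values and a hypothetical benchmark, purely for illustration.

```python
# Hedged sketch (hypothetical data and benchmark value): percentage of
# non-contrast chest CT exams whose Dose Length Product (DLP) is at or
# below a published benchmark.
def pct_at_or_below_benchmark(dlps, benchmark):
    """Percent of exams with DLP <= benchmark (both in mGy*cm)."""
    at_or_below = sum(1 for d in dlps if d <= benchmark)
    return 100.0 * at_or_below / len(dlps)

chest_dlps = [310, 275, 420, 198, 350, 265, 505, 240]  # mGy*cm, illustrative
print(pct_at_or_below_benchmark(chest_dlps, benchmark=350))
```

In practice the numerator and denominator definitions (exam mapping, exclusions) come from the measure specification, not from a simple threshold count like this.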



Risk Adjustment of Quality Measures

Currently, the NRDR does not report risk-adjusted measures. We plan to begin developing risk-adjustment models in the next year. In the meantime, we use several other strategies to maintain comparability:

  • We specify the standards fairly narrowly to maintain comparability to peers. For example, for dose index measures, we map all exam names to a standard lexicon so that exams are compared to similar protocols at peer facilities, and all measurements are standardized to the same phantom. Radiation dose indices may be justifiably different for patients of different sizes, so for body exams we report size-specific dose estimates, which adjust the scanner output for patient size. This provides reasonable comparisons across facilities.

  • A number of our measures are screening measures and therefore apply to asymptomatic patients. This largely mitigates the need for risk adjustment, because screening populations tend to be similar across facilities. In addition, NRDR feedback reports provide information to help facilities meaningfully compare themselves to their most similar peers.

  • We provide demographic distributions of patients for mammography measures. These measures cover a screening mammography population that tends to be large, and the demographic distribution of screening populations (women age 40 and older) tends not to differ substantially across facilities. NRDR feedback reports provide demographic comparisons so that facilities can examine whether patient characteristics may explain a facility's deviation from registry averages.

  • For all registries, we compare facilities to other facilities with similar characteristics, such as the same type (e.g., academic/community), similar location (e.g., metropolitan/suburban/rural), and the same geographic region. Patient populations in similar locations tend to be more alike than patient populations nationwide, so these narrower peer groups provide more meaningful comparisons.
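The size-specific dose estimate mentioned in the first bullet can be sketched as below. The conversion factors shown are hypothetical placeholders, not the tabulated values from AAPM Report 204 (the usual source for such factors), and the function names are illustrative.

```python
# Illustrative sketch of a size-specific dose estimate (SSDE):
# scanner-reported CTDIvol is multiplied by a conversion factor that
# depends on patient size (effective diameter). The factors below are
# hypothetical placeholders, not real AAPM Report 204 table values.
def ssde(ctdivol_mgy, effective_diameter_cm, factors):
    """SSDE = CTDIvol x size-dependent conversion factor (mGy)."""
    # Use the factor for the closest tabulated effective diameter.
    d = min(factors, key=lambda k: abs(k - effective_diameter_cm))
    return ctdivol_mgy * factors[d]

# Hypothetical conversion factors keyed by effective diameter (cm).
# Smaller patients absorb more dose per unit of scanner output, so
# factors decrease as diameter increases.
FACTORS = {16: 2.0, 24: 1.5, 32: 1.1, 40: 0.8}

print(ssde(10.0, 25.0, FACTORS))
```

The point of the adjustment is that the same scanner output yields different absorbed doses in patients of different sizes, so comparing raw CTDIvol across facilities with different patient populations would be misleading.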



Examples of Comparison Benchmarks in Use in NRDR

Sample reports illustrating peer comparisons are posted on the NRDR website.