Assessing the Quality of Individual Estimates in Decision Support Systems

Medical decision support tools are increasingly available on the Internet and are used by lay persons as well as health care professionals. Some of these tools aim to provide an "individualized" prediction of future health-related events, such as prognosis in breast cancer, given specific information about the individual. These tools are usually based on models synthesized from data with a fine granularity of information. Under the umbrella of "personalized" medicine, such individualized prognostic assessments are sought as a means to replace the general prognostic information given to patients with specific probability estimates that pertain to a small stratum to which the patient belongs, and ultimately to each individual patient (i.e., a stratum with n = 1). These estimates are then used to inform decision making and are therefore of critical importance for public health.

Responsible use of prognostic models for patient counseling and medical decision making requires thorough model validation. Verifying that estimated event probabilities reflect the underlying true probability for a particular individual (i.e., verifying the calibration of the prognostic model) is a critical but often overlooked step in evaluation, which usually favors verification of the discriminatory ability of the model. Selection of the best predictive model for a given problem should be based on a robust comparison that takes into account errors in individual predictions, calibration, and discrimination indices. A robust test for comparing calibration across different models does not currently exist.
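As background for the two evaluation axes mentioned above, the sketch below contrasts a standard discrimination index (the c-statistic, or AUC) with a widely used calibration test (the Hosmer-Lemeshow goodness-of-fit statistic). This is a minimal illustration, not the project's own methods; the function and variable names are illustrative assumptions.

```python
# Minimal sketch: discrimination (AUC) vs. calibration (Hosmer-Lemeshow test)
# for a binary prognostic model. Names and data here are illustrative only.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow chi-square statistic and p-value over risk deciles."""
    order = np.argsort(y_prob)
    groups = np.array_split(order, n_groups)          # roughly equal-sized risk groups
    stat = 0.0
    for g in groups:
        observed = y_true[g].sum()                    # observed events in the group
        expected = y_prob[g].sum()                    # expected events under the model
        n, p_bar = len(g), y_prob[g].mean()
        stat += (observed - expected) ** 2 / (n * p_bar * (1.0 - p_bar))
    p_value = chi2.sf(stat, df=n_groups - 2)          # conventional G-2 degrees of freedom
    return stat, p_value

# Toy usage: y are observed outcomes (0/1), p are predicted probabilities.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=500)
y = rng.binomial(1, p)                                # simulate a well-calibrated model
print("AUC (discrimination):", roc_auc_score(y, p))
print("Hosmer-Lemeshow (calibration):", hosmer_lemeshow(y, p))
```

A model can score well on one axis and poorly on the other, which is why the project treats calibration as a distinct, and often neglected, validation step.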

Our specific aims are to:

  1. Characterize the main deficiencies of existing calibration indices in the context of individualized predictions and develop a new model-independent calibration index and comparison test that can be used to assess and compare predictive models based on both statistical regression and machine learning methods;
  2. Unify the theories on decomposition of error into discrimination and calibration components stemming from the statistical and machine learning communities, to derive a refined measure of calibration that can be calculated from measures of error and discrimination (a classical example of such a decomposition is sketched after this list). We will compare the performance of the new methods with existing ones in predictive models derived from real clinical data from several medical domains.
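As a concrete illustration of what such a decomposition looks like, the sketch below computes the classical Murphy decomposition of the Brier score, in which the reliability term is a calibration component and the resolution term is a discrimination-like component. It is offered only as background under simplifying assumptions (binning by distinct forecast values), not as the refined measure the project will derive.

```python
# Illustrative sketch of the Murphy decomposition of the Brier score:
#   Brier = reliability - resolution + uncertainty
# one classical example of splitting prediction error into calibration- and
# discrimination-related parts. Binning by unique predicted values is an
# assumption made here for simplicity.
import numpy as np

def brier_decomposition(y_true, y_prob):
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    n = len(y_true)
    base_rate = y_true.mean()
    reliability = resolution = 0.0
    for p in np.unique(y_prob):                       # one bin per distinct forecast value
        mask = y_prob == p
        n_k = mask.sum()
        o_k = y_true[mask].mean()                     # observed event rate in the bin
        reliability += n_k * (p - o_k) ** 2           # calibration component
        resolution += n_k * (o_k - base_rate) ** 2    # discrimination-like component
    reliability /= n
    resolution /= n
    uncertainty = base_rate * (1.0 - base_rate)
    brier = np.mean((y_prob - y_true) ** 2)
    return brier, reliability, resolution, uncertainty

# Small worked example with forecasts restricted to two probability levels.
y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
p = np.array([0.2, 0.2, 0.8, 0.8, 0.8, 0.2, 0.8, 0.2])
b, rel, res, unc = brier_decomposition(y, p)
print(f"Brier={b:.3f}  reliability={rel:.3f}  resolution={res:.3f}  uncertainty={unc:.3f}")
# With this binning the identity Brier = reliability - resolution + uncertainty holds exactly.
```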

In addition to DBMI members (Lucila Ohno-Machado, M.D., Ph.D., Jihoon Kim, and Staal Vinterbo), the following collaborators are involved:

  • Concha Bielza, Pedro Larranaga, and Victor Robles, from the Polytechnic Institute of Madrid
  • Stephan Dreiseit, from the Univ. Hagenberg
  • Johan Lundin, from the Univ. Helsinki
  • Michael Matheny, from Vanderbilt
  • Catherine Racowski, from Brigham and Women's Hospital
  • Fred Resnic, from BWH

This project is funded by the National Library of Medicine, NIH.