Gupta’s team tested their method on a population of 1,000 Medicaid patients. In a simulation using data from a partner hospital, they selected 200 people to participate in case management, a particular intervention where patients are partnered with a team of social workers, nurses, and physicians who coordinate their care.
The researchers’ goal was to test their robust optimization method against “scoring rules,” the current standard practice for managing resource allocation problems. Under a scoring rule, practitioners assign each individual in the candidate population a score, and those with the highest scores are targeted for treatment. Gupta’s team wanted to know whether their model would outperform scoring rules in identifying optimal candidates, and if so, why.
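The scoring-rule baseline can be sketched in a few lines: score every candidate, then treat the k highest scorers. This is a minimal illustration only; the scores and population below are hypothetical and are not drawn from the study's data or from Gupta's actual model.

```python
def target_by_score(scores, k):
    """Scoring rule: return the indices of the k highest-scoring candidates."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

# Toy population of 10 patients with made-up risk scores,
# standing in for the study's 1,000-patient population.
scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3, 0.5, 0.95]
selected = target_by_score(scores, k=3)
print(sorted(selected))  # indices of the three highest-scoring patients
```

Note that the rule ranks purely on the score itself; as the article goes on to explain, this can misfire when treatment effects vary across a heterogeneous population.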
They found that scoring rules work well when the treatment is benign or the patient base is nearly homogeneous. “However,” they warn, “scoring rules can perform arbitrarily badly when the treatment is potentially harmful. In addition, as heterogeneity in the sample increases, scoring rules can be worse than not targeting at all.”
That finding is particularly disturbing, Gupta said. “Allocating interventions in a bad way might actually be worse than doing nothing to address the problem.”
By contrast, Gupta’s method for maximizing effectiveness performs nearly as well as scoring rules when heterogeneity is small, and much better than scoring rules as heterogeneity increases. Most importantly, unlike current practice, it is never worse than doing nothing.