
This academic article critiques the widespread deployment of "predictive optimization" algorithms: systems that use machine learning to make consequential decisions about individuals based on predictions of their future behavior or outcomes. The authors argue that, despite claims of accuracy, efficiency, and fairness, these systems fail on their own terms because of seven recurring shortcomings: the difficulty of translating predictions into optimal interventions, mismatches between the outcome of genuine interest and the outcome actually measured in the data, bias in training data, fundamental limits on the predictability of social outcomes, unavoidable disparities in performance across groups, the lack of meaningful contestability for affected individuals, and vulnerability to strategic manipulation (an instance of Goodhart's Law). Analyzing eight real-world case studies across diverse domains such as criminal justice and healthcare, the authors show that these critiques apply broadly and cannot be resolved through minor design changes, ultimately challenging the legitimacy of deploying such systems. The paper concludes with a rubric of critical questions for assessing predictive optimization systems and advocates for alternative approaches to decision-making.