Resisting the Machine: Learning a New Method
Grant Glass, University of North Carolina at Chapel Hill
Prepared for MLA 2019
On December 20th, Congress passed the First Step Act, which included an expansion of the federal safety valve, giving judges the ability to make exceptions to mandatory minimum sentences for nonviolent drug offenders (Rascoe). At first glance, this sounded like real reform, but hidden within the bill was another, more nefarious expansion: the use of algorithms. Specifically, the use of machine learning algorithms (MLAs) to determine which inmates can use earned time credits to reduce their sentences (Lopez). Defenders of the algorithms often cite fairness1 and studies of judicial bias to justify their use2, claiming the algorithms are less biased than humans. The question that arises out of this debate is whether these MLAs are biased, and how we determine that bias.
1 See Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).
2 See Posner, Eric A. “Does political bias in the judiciary matter?: Implications of judicial bias studies for legal and constitutional reform.” The University of Chicago Law Review 75.2 (2008): 853-883.
To begin to unpack that question, I point to an article titled “Machine Bias,” published on May 23, 2016, in ProPublica, which told the stories of Brisha Borden and Vernon Prater, and of Dylan Fugett and Bernard Parker. In these stories, Brisha and Bernard faced harsher sentences than their white counterparts for the same crimes, all because of an MLA. Northpointe (now renamed Equivant)3 developed the algorithm, straightforwardly called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Essentially, the MLA predicted the likelihood of an offender committing a crime they had yet to commit, providing the judge with a score from 1 to 10, with 1 being least likely to re-offend and 10 most likely. The score is based on some demographic data and the offender's responses to a 137-question survey, which includes questions like “How many of your friends have been arrested?” or “How old were you when your parents separated?” Defenders note that race is not used in the calculation, yet it shows up in the score. According to ProPublica, this algorithm is used in over nine states, so its use is not limited to a few states or a few cases (Angwin et al.). It is doing real damage right now, at scale.
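Northpointe's actual model and its weights are proprietary, so the sketch below is only a minimal, hypothetical illustration of the general shape such a risk scorer might take: a statistical classifier fit to survey-style answers and criminal-history counts, with its predicted probability binned into a 1-to-10 score. The feature names, synthetic data, and choice of logistic regression are my own assumptions for illustration, not COMPAS itself.

```python
# Hypothetical sketch of a survey-based risk scorer (NOT Northpointe's model,
# which is proprietary). Shows the general shape only: answers and history
# counts go in, a predicted probability comes out, binned into a 1-10 score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic stand-ins for survey answers such as "How many of your friends
# have been arrested?" plus prior-record and age features. Race is absent as
# an input, but correlated proxies like these can still carry it into scores.
friends_arrested = rng.poisson(2, n)
prior_arrests = rng.poisson(1, n)
age = rng.integers(18, 70, n)
X = np.column_stack([friends_arrested, prior_arrests, age])

# Fabricated "re-arrested within two years" labels, loosely tied to the features.
logit = 0.4 * friends_arrested + 0.5 * prior_arrests - 0.04 * (age - 18) - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

def decile_score(features):
    """Map a predicted recidivism probability to a 1-10 score
    (1 = least likely to re-offend, 10 = most likely)."""
    p = model.predict_proba(np.atleast_2d(features))[0, 1]
    return min(10, int(p * 10) + 1)

print(decile_score([3, 1, 24]))  # e.g., a hypothetical 24-year-old defendant
```

Even in a toy model like this, the point about proxies holds: race can be omitted as a column, and questions about friends, family, and neighborhood can reintroduce it into the score through correlation.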
These MLAs have already infiltrated far beyond the criminal justice system, oftentimes serving the majority of internet content through largely unsupervised means, their computational processes kept opaque by the technical literacy they demand. Scholars like David Berry have advocated for humanists to engage with and understand these MLAs (84), while others like Alexander Galloway argue that we should withdraw from trying to understand complex systems like MLAs and instead pursue “the very questions that technoscience has bungled” (128). Rather than seeing computational criticism as a binary between these two valid approaches, I advocate for a hybrid approach to MLAs: constructing models that interrupt the efficiency-focused notions of computer scientists and statisticians and replace them with a more human-centered approach to algorithmic knowledge production. This means simultaneously engaging in debates over deploying MLAs morally while advocating for the creation of human-centric models of the world that provide both an alternative to and a critique of the machine-generated ones.
In what ways can the humanities begin to engage in this work? The first is to change the conversation. The advocates for these algorithmic approaches rely on statistics, which in the case of COMPAS are used to claim that the same risk score equals the same chance of recidivism no matter the defendant's race. An article refuting the ProPublica conclusions points to “a mathematical limit to how fair any algorithm — or human decision-maker — can ever be” (Corbett-Davies et al.), a limit sketched numerically below. What if we are measuring this wrong? What if the statistics, no matter what, will always reproduce racism? Where do we go from here? Algorithms are designed to do a task; we just need to change how that task is accomplished. What the story of COMPAS tells us is that the humanistic intervention can be made by considering the stories of Brisha and Bernard in the data. We advocate that their voices be heard amid all the data noise. We do this by ensuring that the datasets upon which these algorithms rely reflect myriad lives and experiences, not just the ones most easily acquired. Then, instead of relying solely upon statistics to determine outcomes, we need humanistic perspectives. MLAs are here and all around us, but we can at least help shape their use.
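To make that “mathematical limit” concrete, here is a small synthetic computation of my own; the groups, base rates, and cutoff are illustrative assumptions, not the COMPAS or ProPublica data. It shows that when two groups have different underlying re-arrest rates, a score that is equally well calibrated for both (the same score means the same chance of recidivism) can still flag a far larger share of the higher-base-rate group's non-re-offenders as “high risk.” This is the trade-off Kleinberg et al. formalize and the disparity ProPublica measured.

```python
# Synthetic illustration of the fairness trade-off (not the actual COMPAS data):
# a score can be equally calibrated across two groups and still produce
# unequal false positive rates when the groups' underlying base rates differ.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated people per group

def simulate_group(base_rate):
    """Draw each person's true recidivism probability around the group's base
    rate, a perfectly calibrated risk score, and the actual outcome."""
    true_risk = np.clip(rng.normal(base_rate, 0.15, n), 0.01, 0.99)
    reoffends = rng.random(n) < true_risk
    score = true_risk  # calibrated by construction: score equals true probability
    return score, reoffends

# Hypothetical groups and base rates, chosen only to expose the trade-off.
for group, base_rate in [("Group A", 0.45), ("Group B", 0.30)]:
    score, reoffends = simulate_group(base_rate)
    high_risk = score >= 0.5               # same "high risk" cutoff for both groups
    ppv = reoffends[high_risk].mean()      # P(re-offends | labeled high risk)
    fpr = high_risk[~reoffends].mean()     # P(labeled high risk | did not re-offend)
    print(f"{group}: precision of 'high risk' = {ppv:.2f}, false positive rate = {fpr:.2f}")
```

With these made-up numbers, both groups show roughly similar precision among those labeled high risk, while their false positive rates diverge sharply: the group with the higher base rate has far more people wrongly flagged who never go on to re-offend.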
Works Cited:
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-crim… sentencing. Accessed 31 December 2018.
Berry, David M. “Prolegomenon to a Media Theory of Machine Learning: Compute-Computing and Compute-Computed.” Media Theory 1.1 (2017): 74-87.
Corbett-Davies, Sam, Emma Pierson, Avi Feller, and Sharad Goel. “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear.” Washington Post, 17 October 2016, https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?noredirect=on&utm_term=.0af87b0cfc18. Accessed 31 December 2018.
Duwe, Grant, and KiDeuk Kim. “Out with the old and in with the new? An empirical comparison of supervised learning algorithms to predict recidivism.” Criminal Justice Policy Review 28.6 (2017): 570-600.
Galloway, Alexander. “The cybernetic hypothesis.” differences 25.1 (2014): 107-31.
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).
Lopez, German. “The Senate just passed criminal justice reform.” Vox, 19 December 2018, https://www.vox.com/future-perfect/2018/12/18/18140973/first-step-act-cr… senate-congress. Accessed 31 December 2018.
Posner, Eric A. “Does political bias in the judiciary matter?: Implications of judicial bias studies for legal and constitutional reform.” The University of Chicago Law Review 75.2 (2008): 853-883.
Rascoe, Ayesha. “Bipartisan Criminal Justice Bill Closer To Becoming Law After Congressional Approval.” National Public Radio, 18 December 2018, https://www.npr.org/2018/12/18/677372822/hold-bipartisan-criminal-justic… becoming-law-after-senate-approv. Accessed 31 December 2018.