Government authorities are increasingly using algorithms and machine learning technologies to conduct government business, allowing algorithms to make decisions on everything from assigning students to magnet schools and allocating police resources to setting bail and distributing social welfare benefits. While algorithms promise to make government function more effectively, their growing use presents significant issues that policymakers have yet to address:
- Algorithms can make mistakes, either because they are poorly conceived or because they contain coding errors. Improperly functioning government algorithms can cause serious harm, such as by wrongly depriving people of health benefits or incorrectly identifying them as criminals.
- Algorithms can amplify pre-existing biases when they are trained on biased historical data. Biased algorithms have been shown, for example, to increase the racially disparate impact of policing and to cause the disproportionate removal of children from poorer families.
- Algorithms are unaccountable. Agencies acquire algorithms without fully understanding how they function or assessing their reliability, and then often fail to test their reliability in use. Deficiencies in current disclosure laws make it impossible for the public to know whether government algorithms are functioning properly or to identify sources of ineffectiveness or bias.
- Algorithms can make government less accountable. Without informed oversight of the use of algorithms, officials can offload responsibility for failures onto the algorithm while gaining unwarranted credibility from the algorithm's perceived power of analysis.
These concerns are very real in Connecticut today, as confirmed by our efforts over the past year to assess the reliability and accountability of algorithms used by three State agencies: the Department of Children and Families (DCF), the Department of Education (DOE), and the Department of Administrative Services (DAS). Responses to Freedom of Information Act (FOIA) requests confirmed both that existing disclosure requirements are insufficient to allow meaningful public oversight of the use of algorithms, and that agencies do not adequately assess the effectiveness and reliability of algorithms, either at the time of acquisition or after implementation. The FOIA responses generally revealed that agencies are insufficiently aware of the potential problems posed by their algorithms and unconcerned about the lack of transparency.
- DCF provided the only complete FOIA response, producing documents on its use of an algorithm intended to reduce the incidence of children suffering a life-threatening episode. It disclosed basic information about the algorithm but not its source code, which DCF did not possess and which it claimed was protected as a trade secret. The production indicated that DCF had not performed a robust evaluation of the algorithm's efficacy or bias, either before implementing it or during the three years it was in use.
- DOE made a partial production concerning its use of an algorithm to assign students to schools, an issue that has raised substantial disparate racial impact questions in the past. DOE's disclosure did not reveal how its school-assignment algorithm worked, beyond noting that it implemented the "Gale-Shapley deferred acceptance algorithm" and offered no mechanism for parents to challenge its determinations. DOE refused on trade secret grounds to produce source code documenting the algorithm's operation, and provided no records related to its acquisition of the algorithm beyond the procurement announcement and the contract (which disclosed that the algorithm cost over $650,000 to acquire). DOE's incomplete production revealed no effort to evaluate the algorithm's efficacy or bias.
- DAS provided no documents in response to our request for information concerning a new algorithm used in hiring state employees and contractors. The agency did claim in a phone call that many of the requested documents would be withheld – we believe erroneously – under various FOIA exemptions, but despite a half-dozen follow-ups over a period of several months, the agency failed to produce any documents at all.
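For readers unfamiliar with the method DOE named, the Gale-Shapley deferred acceptance algorithm is a well-documented matching procedure: students apply to schools in order of preference, and each school tentatively holds its highest-priority applicants up to capacity, rejecting the rest, who then apply to their next choice. The sketch below illustrates the general procedure only; the student names, school names, priorities, and capacities are invented for illustration and bear no relation to DOE's actual system or data.

```python
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Gale-Shapley deferred acceptance, student-proposing variant.

    Students apply to schools in preference order; each school tentatively
    keeps its highest-priority applicants up to capacity and rejects the
    rest, who move on to their next choice.
    """
    # rank[school][student] = position of student in that school's priority list
    rank = {s: {st: i for i, st in enumerate(pr)}
            for s, pr in school_priorities.items()}
    next_choice = {st: 0 for st in student_prefs}   # index of next school to try
    held = {s: [] for s in school_priorities}       # tentatively admitted students
    unassigned = list(student_prefs)

    while unassigned:
        st = unassigned.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue  # student has exhausted their preference list
        s = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[s].append(st)
        if len(held[s]) > capacities[s]:
            # over capacity: reject the lowest-priority applicant currently held
            held[s].sort(key=lambda x: rank[s][x])
            unassigned.append(held[s].pop())

    return {st: s for s, sts in held.items() for st in sts}


# Illustrative (hypothetical) example: three students, two schools, one seat each
matching = deferred_acceptance(
    student_prefs={"ana": ["north", "south"],
                   "ben": ["north", "south"],
                   "cam": ["south", "north"]},
    school_priorities={"north": ["ben", "ana", "cam"],
                       "south": ["ana", "cam", "ben"]},
    capacities={"north": 1, "south": 1},
)
# ben gets north, ana gets south; cam is left unmatched
```

A key property of deferred acceptance, and presumably why DOE's vendor chose it, is that the resulting match is stable: no student and school would both prefer each other to their assigned outcome. The algorithm itself, however, says nothing about how the schools' priority rankings are constructed, which is precisely where disparate-impact concerns can enter.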
The problems with algorithmic accountability confirmed by this FOIA experiment call out for a legislative response. Several options have been implemented or seriously considered elsewhere, such as requiring agencies to assess the effectiveness and bias of their algorithms, mandating affirmative public disclosures about algorithms used to conduct government business, waiving trade secret protection in certain circumstances, and requiring disclosures to individuals subject to algorithmic decisions.
While the details of such approaches require study, it is imperative that steps be taken now to identify an effective response to the current lack of algorithmic accountability. The potential for serious harm to be inflicted by malfunctioning or biased algorithms is too serious to ignore.