It’s Time for Connecticut to Get Control of Its Algorithms

By Mitchell W. Pearlman

A headline in the New York Times a few months ago read: “Black Americans Are Much More Likely to Face Tax Audits, Study Finds.”[1] The story reported that Black taxpayers are at least three times as likely to be audited by the Internal Revenue Service as other taxpayers, according to a study from Stanford University, the University of Michigan, the University of Chicago and the Treasury Department.

Why did the study reach this conclusion? It wasn’t because of bias on the part of individual tax enforcement agents. Nor was it because Black Americans engage in more tax evasion than others. The study found it was because the computer algorithms that the IRS uses to determine who is selected for an audit flag the tax returns on which certain government credits are reported. A significant number of Black taxpayers tend to avail themselves of those credits.

This is yet another instance in an ever-expanding series of exposés in which government computer algorithms have proven to be error-prone, racially biased and, in some cases, downright dangerous to people’s health, welfare and safety. For example, algorithms that use flawed data or assumptions can disproportionately target minorities, low-income families and disabled people. They can lead agencies to make devastating decisions about removing children from their homes. They can wrongfully deny health, housing and other benefits. And they can determine where to concentrate police activity and even where to assign children to schools. These concerns led the Connecticut Advisory Committee to the U.S. Commission on Civil Rights to recommend that Connecticut lawmakers pass laws to regulate the use of such government computer systems.[2] It’s time for the legislature to act before more harm can be done.

Yale Law School’s Media Freedom and Information Access Clinic (MFIA) studied the issue in Connecticut and produced a telling report.[3] The report states: “[G]overnment authorities are increasingly using algorithms and machine learning technologies to conduct government business, allowing algorithms to make decisions on everything from assigning students to magnet schools, to allocating police resources, setting bail and distributing social welfare benefits. While algorithms promise to make government function more effectively, their growing use presents significant issues that policymakers have yet to address….”

Here are some of the key findings of MFIA’s Connecticut study:

  1. Algorithms make mistakes, either because they are poorly conceived or because of coding errors.
  2. Algorithms amplify pre-existing biases when they are “trained” on biased historical data.
  3. Algorithms are unaccountable. Agencies acquire algorithms without fully understanding how they function or assessing their reliability, and then often fail to test their reliability in use.
  4. Deficiencies in current disclosure laws make it impossible for the public to know whether government algorithms are functioning properly or to identify sources of ineffectiveness or bias.

So, what can Connecticut do to address these concerns? Well, fortunately, another New England state has recently enacted legislation that provides a good model, one that Connecticut can easily adapt to its own systems and institutions of government. As appropriately modified for Connecticut’s governmental structure, this legislation would be a great first step in gaining control of the state’s computer algorithms and automated decision systems.

Vermont Act 132 (H.410), “An Act relating to the Use and Oversight of Artificial Intelligence in State Government,”[4] took effect on July 1, 2022. Among other things, the law uses commonly understood definitions of important terms.[5]

The law requires Vermont’s Agency of Digital Services to “conduct a review and make an inventory of all automated decision systems that are being developed, employed, or procured by State government.” For each automated decision system, the inventory must include (among other things):

  1. The system’s name and vendor.
  2. A description of the system’s general capabilities, including whether the system is used or may be used for decision-making without human review and what impact those decisions may have on state residents.
  3. The types of data inputs that the technology uses and how those data are generated, collected, and processed.
  4. Whether the system has been tested for bias by an independent third party, whether it has any known bias, or whether it remains untested for bias.
  5. A description of the purpose and proposed use of each automated decision system, including what decisions it will be used to make and whether it is an automated final decision system or an automated decision system that merely supports a decision made by human beings.

In addition, Vermont’s law requires the responsible state agency to submit reports to the legislature on the automated decision system inventories required by the law.

Importantly, the law also establishes an Artificial Intelligence Advisory Council composed of government officials, academics, lawyers, ethics and human rights experts and members of nongovernmental organizations. To the extent possible, advisory council members are to be drawn from diverse backgrounds and have experience with artificial intelligence. The council is to advise the state agency responsible for implementing the legislation, report to the legislature on its activities and findings, and engage in public outreach and education.

Vermont’s law provides an exceptional blueprint for Connecticut to consider and adapt as a first step toward understanding the challenges it faces in the rapidly evolving information technology arena, including the use of computer algorithms in automated decision systems that employ artificial intelligence. A bill currently before the state legislature, Senate Bill 1103, is based in part on Vermont’s law and, if enacted, would do much to start Connecticut on a path toward meeting these challenges.

As more and more critical decisions are informed or directed by government computer algorithms – algorithms whose benefits and consequences most government officials and state residents don’t understand – gaining control of these technologies is imperative if government is to serve the public responsibly. The keys to gaining control are knowledge and transparency. Both are necessary so that policymakers and the public understand, and have confidence in, what government computer algorithms are actually doing: whether they are working efficiently and as intended, without consequential errors or bias, and without violating other fundamental human rights.

Senate Bill 1103 is not as strong as Vermont’s new law, but it would start Connecticut on the path to achieving these critical twin goals.[6]

Mitchell W. Pearlman is a former executive director of the Connecticut Freedom of Information Commission and a current member of the Board of Directors of the Connecticut Foundation for Open Government and the Connecticut Council on Freedom of Information.

[1] https://www.nytimes.com/2023/01/31/us/politics/black-americans-irs-tax-audits.html.

[2] “Panel: Connecticut Needs Safeguards for Agency Algorithms,” https://apnews.com/article/technology-politics-connecticut-state-government-civil-rights-177f4f046a55f85ae795b8a09f6e28f4.

[3] “Algorithmic Accountability White Paper,” Yale Law School Media Freedom and Information Access Clinic.

[4] https://legislature.vermont.gov/Documents/2022/Docs/ACTS/ACT132/ACT132%20As%20Enacted.pdf.

[5] For example, an “algorithm” is defined as “a computerized procedure consisting of a set of steps used to accomplish a determined task” and an “automated decision system” is defined as “any algorithm, including one incorporating machine learning or other artificial intelligence techniques, that uses data-based analytics to make or support government decisions, judgments, or conclusions.”

[6] In this regard, it would be beneficial if Connecticut substituted the Office of the Comptroller for the Department of Administrative Services to help reduce any appearance of a conflict of interest, since the former is independent of the gubernatorial administration, which is responsible for most Executive Branch computer operations. The Office of the Comptroller is also a better choice because of its leadership in providing public access to government information.