
Why Is It Challenging to Regulate Government-Employed Algorithms?

By Whale

Today, many government agencies employ automated decision systems (ADS) to perform complex tasks such as detecting unemployment fraud, informing criminal sentencing, distributing health benefits, and prioritizing child abuse cases. However, the people affected by these systems know very little about how the government-employed algorithms decide their fate, which breeds mistrust.

Moreover, the rare insights the public has received into the performance of these algorithms have been far from comforting. In several states, algorithms used to determine how much home health aid residents receive have slashed benefits for thousands. In Michigan, an algorithm designed to flag fraudulent unemployment claims wrongly flagged thousands of applicants, causing some to lose their homes and file for bankruptcy. And PredPol, software used by US police departments to predict where future crimes will occur, disproportionately sends officers to Black and Hispanic neighborhoods.

Given these findings, steps have been taken to question the authority of these automated decision systems and the role they play in determining the fate of millions of US citizens. Yet despite numerous legislative efforts, regulating these algorithms has proven far from easy.

Legislative Efforts to Regulate ADS

In 2018, the New York City Council formed a task force to study the city’s use of algorithms. It was the first legislation in the country to attempt to shed light on how government agencies use AI to make consequential decisions about policies and people, and the task force’s formation was heralded as a watershed moment that would usher in an era of oversight.

In the following years, revelations about the harms of algorithm use prompted lawmakers across the country to introduce roughly 40 bills aiming to study and regulate government-employed algorithms. These bills proposed creating study groups, auditing algorithms for bias, and more. However, most died upon introduction or expired in committee after brief hearings.

Bills to create study groups to monitor the use of algorithms failed in California, New York, Massachusetts, Hawaii, and Virginia. Bills that required algorithm audits died in Maryland, California, Washington, and New Jersey. In multiple states, ADS study bills are still pending. The sole state bill to pass has been Vermont’s, which created a task force whose recommendations to adopt regulations and form a permanent AI commission have been ignored.

So the question arises: why is it hard to regulate automated decision systems? Let’s explore in detail!

4 Reasons Why It’s Hard to Regulate Automated Decision Systems

Here are some of the reasons lawmakers have been unable to regulate the automated decision systems used by government agencies:

Lawmakers Have Limited Knowledge of Algorithms in Use

Lawmakers have limited to no knowledge of government-employed algorithms because state agencies rebuff their attempts to gather even basic information about them, leaving them unaware of what the tools are and what they do. After creating its task force, NYC conducted a survey of city agencies that listed only sixteen ADS across nine agencies, which was suspected to be a gross underestimate. And since lawmakers don’t know how the systems are used, it’s hard for them to craft regulations.

Vermont’s ADS study group reported that although it had examples of local and state governments using AI applications, it was unable to identify those applications and how they are used. The Hawaii Senate likewise failed to convene a task force through a nonbinding resolution and then failed to pass a binding one.

Advocacy groups and the legislators who authored ADS bills in Maryland, Michigan, California, Massachusetts, Washington, and New York have no clear picture of the extent to which their state agencies use automated decision systems. Advocacy groups that survey government agencies about their ADS use say they routinely receive inadequate information or no response at all.

Corporate Influence Is a Hurdle for Lawmakers

Washington is planning to introduce an updated version of its rejected bill. Had the bill passed, it would have required state agencies seeking automated decision systems to produce an accountability report disclosing the name and purpose of each system, the data it would use, and whether it had been tested for bias. The bill would have banned the use of discriminatory ADS tools and given people affected by such algorithms the right to appeal their decisions.

Lawmakers insist that corporate influence is a major hurdle to passing such bills. Because Washington is a high-tech state, large technology companies wield influence over governmental processes and are at the root of the pushback against the bill.

California’s similar bill is still pending in committee. It asks vendors looking to sell ADS tools to government agencies to submit an ADS impact report along with their bids, and it requires the state’s Department of Technology to post the impact reports for active algorithms on its website. The bill was opposed by 26 industry groups, including corporate tech representatives and organizations representing medical device makers, insurance companies, and banks.

The bill originally aimed to regulate ADS use in both the private and public sectors. Even after it was confined to the public sector, opponents argued it would cost California taxpayers millions of dollars. The opposing industry groups signed a letter stating that the bill would discourage participation in the procurement process, since requiring vendors to complete an impact assessment for their algorithms would be too burdensome.

Overly Broad Definition of ADS

Another point of contention is that most of these bills have overly broad definitions of automated decision systems. However, these definitions mirror those used in proposed regulations in the EU, Canada, and elsewhere.

James McMahan, policy director for the Washington Association of Sheriffs and Police Chiefs, stated that the Washington bill would apply to most state crime lab operations, including fingerprint, DNA, and firearm analysis. Vicki Christophersen, a lobbyist for the Internet Association, testified at the bill’s hearing that it would prohibit the use of red-light cameras.

Maryland’s bill would have likewise required agencies to produce reports detailing each ADS tool’s basic function and purpose, and it would have prohibited the use of discriminatory tools. Delegate Terri Hill, the bill’s sponsor, said the legislation wasn’t telling people not to use ADS; it merely encouraged them to identify biases upfront and ascertain whether the tools are consistent with the state’s overarching goals.

The Maryland Tech Council, an industry group, opposed the bill, arguing that its prohibitions against discrimination were premature and would hurt innovation. The council claimed that trying to evaluate bias in such an emerging area is jumping the gun and would deter companies from developing new technologies for fear of being labeled discriminatory.

Limited Success in the Private Sector

State and local legislatures have made fewer attempts to regulate private-sector use of ADS, such as in car insurance and tenant screening, but in recent years these measures have seen marginal success.

The NYC Council passed a bill requiring private enterprises to conduct bias audits of algorithmic hiring tools before using them. Employers use these tools to screen job candidates without a human interviewer. The bill will not take effect until 2023 and has already been deemed too weak by some of its early supporters.

Illinois enacted a state law in 2019 requiring private employers to notify job candidates when they are being evaluated by algorithmic hiring tools. In 2021, the legislature amended the law to require employers who use these tools to report demographic information about job candidates to a state agency so it can be analyzed for evidence of biased decisions. The Colorado legislature also passed a law this year creating a framework for evaluating insurance underwriting ADS and banning the use of biased algorithms in the industry; it will take effect in 2023.
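To make that kind of bias analysis concrete, below is a minimal sketch in Python of one common check an auditor might run on reported hiring data: the four-fifths rule from the EEOC’s Uniform Guidelines, which flags any group whose selection rate falls below 80 percent of the highest group’s rate. The records, group labels, and threshold here are illustrative assumptions, not requirements spelled out in the Illinois or Colorado laws.

```python
from collections import Counter

# Illustrative applicant records: (demographic_group, was_hired).
# A real audit would use the demographic data employers report to the state.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in records)
hires = Counter(group for group, hired in records if hired)

# Selection rate per group: hires divided by applicants.
rates = {group: hires[group] / applicants[group] for group in applicants}
top_rate = max(rates.values())

# Four-fifths rule: a selection rate below 80% of the highest group's
# rate is treated as evidence of possible disparate impact.
for group, rate in sorted(rates.items()):
    ratio = rate / top_rate
    verdict = "possible disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {verdict}")
```

Jurisdictions differ on the exact metrics and thresholds they require, so a real audit would follow the specific rules of the statute in question.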

Despite this limited success in the private sector, legislatures still have a long way to go in regulating the use of automated decision systems in both the private and public sectors.
