Written by: Jose Molinelli, Legislative Intern
Artificial intelligence (AI) has become a topic of growing interest in both government and business. Its myriad benefits and risks pose a dilemma: how to strike a balance between incentivizing the technology's development and mitigating its potential impacts on civil rights, national security, and democracy. This article summarizes the history and significance of AI, and how state, federal, and foreign governments are working to address this emerging technology.
AI Throughout History: The concept of AI dates back to Greek mythology and has since appeared in famous works such as Mary Shelley’s “Frankenstein” and Fritz Lang’s 1927 film Metropolis, which hinted at the ethical debates policymakers and industry are grappling with today. The first modern AI program debuted in 1951, when Christopher Strachey wrote a checkers program for the Ferranti Mark I computer, and AI has since evolved into a tool most have only begun to understand.
Defining AI: Artificial intelligence is a computerized digital program with the ability to learn, reason, generalize, and infer meaning. It does so by means of data perception, synthesis, and inference, allowing it to perform tasks such as optical recognition, problem solving, and data collection. These tasks can be performed within internet search engines, smartphones, and vehicles. Although AI has aimed to simulate the human brain as a model for problem solving, it remains a highly mathematical process relying on machine learning to facilitate user-device interfaces, digital transactions, and data synthesis, among other uses.
AI’s Political, Economic, and Legal Impacts: AI’s impact on daily life is immense. As a governance tool, AI can be used to generate and propagate misinformation, or even to surveil citizenry. This is relevant even at a day-to-day level given the emergence of AI scams and AI cyberattacks, which can threaten national security. In the private sector, businesses have benefitted from using AI to make quicker, more informed investment decisions and other analyses, though there are fears that such use, adopted at scale, could displace a large portion of the workforce. AI also poses several legal quandaries, particularly around the use of AI-generated forms and evidence; civil rights advocates have raised concerns about algorithmic inaccuracies and biases, including in law enforcement surveillance.
Different Approaches to Regulating AI: Both the private sector and governments around the world are using AI in a variety of applications, leading many to call for policymakers to regulate the emerging technology in order to guide its expansion and mitigate its potential threats. In general, existing regulations around AI have sought not to hinder its development, but rather to tie that development to new and existing accountability and reliability standards rooted in social responsibility.
Given that AI depends greatly on data, some nations have taken the approach of instituting consumer data rights by regulating certain programmer and provider practices. The European Union currently regulates data privacy through its General Data Protection Regulation and E-Privacy Directive, which articulate the standards by which developers and service providers can store and use consumer data. The Digital Services Act also requires contracting or development companies to maintain transparency regarding their use of AI by regularly reporting to EU regulators. China, which has exhibited extensive growth in AI, has also enacted a set of AI transparency rights through its Code of Ethics for New Generation AI and its Personal Information Protection Law.
In comparison, the U.S. lacks a singular national privacy law, resulting in a patchwork of federal guidelines surrounding the technology’s use. The FTC has issued guidelines addressing deceptive or unfair practices using AI, while the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission issued a joint statement on AI’s impact on civil rights, fair competition, consumer protection, and equal opportunity. The National Institute of Standards and Technology has also released a voluntary framework, the AI Risk Management Framework, which seeks to inform businesses on how to mitigate AI’s potential risks.
Meanwhile, in both April and June of 2023, President Biden met with the CEOs of AI companies such as OpenAI (the maker of ChatGPT), Microsoft, and Alphabet to discuss the technology’s impacts. The Biden Administration has also stated that it will support forthcoming public assessments of generative AI systems to educate the American public about how such systems align with the Administration’s proposed protections against AI’s harms.
In Congress, lawmakers have held a series of both private and public meetings to discuss the state of AI, maintaining U.S. leadership in AI’s development, and AI’s impact on national security. The conversations so far have indicated bipartisan consensus on finding a middle ground in crafting a regulatory framework.
This week alone, Senators will attend a classified briefing to examine how both the U.S. and its adversaries are implementing AI. It will include testimony from Deputy Secretary of Defense Kathleen Hicks, Director of National Intelligence Avril Haines, White House Office of Science and Technology Policy Director Arati Prabhakar, and National Geospatial-Intelligence Agency Director Trey Whitworth. Secretary Hicks is expected to speak on current U.S. defense uses of AI and on how China has not made commitments to ensure human accountability in AI decisions.
Meanwhile, Senate Majority Leader Chuck Schumer (D-NY) recently stated that a proposed framework for AI policy, titled the SAFE Innovation Framework, would be released in the coming weeks to regulate the emerging technology. It aims to prioritize innovation while protecting competition, civil rights, and democracy. Sen. Schumer emphasized that lawmakers, regulators, and the public must all become better informed about AI for there to be effective legislation in this area. Two proposed bills regarding AI are also being considered in the Senate: one focuses on fostering transparency in the federal government’s use of AI, and the other would establish a federal office examining how the U.S. can remain competitive in AI while protecting Americans’ civil liberties.
At the state level, there is growing interest in regulating AI, with particular attention given to the technology’s use in policing, consumer rights, hiring procedures, and automated decision making in various industries. Vermont passed legislation to create an Artificial Intelligence Commission, which in turn inspired bills currently being considered by the Texas, California, Washington, and Connecticut state legislatures. Both New York and Illinois passed legislation regulating employers’ use of AI as a hiring or promotion tool. In Colorado, California, and Washington, D.C., legislation has been considered to regulate insurance companies through the application of impact assessments, risk management frameworks, and algorithmic eligibility standards.
Applying AI Going Forward: The debate over AI’s development has largely centered on ethical responsibility as a means of mitigating potential risks to national security and democracy overall. Concerns over user privacy and civil liberties are likely to be at the core of future policy proposals. It appears some level of human involvement will be required to avoid the potential social and legal harms of AI, particularly in law enforcement and private-sector use, to prevent discrimination or other adverse effects. Finally, fostering transparency in the development and use of AI will be key to making the public better informed about how AI impacts daily life. It will also enable elected officials to better tailor new policies that regulate AI effectively as a tool for good, rather than an instrument for harm.