By Sanobar Valiani, 2L
Artificial intelligence (“AI”) aims to replicate human intelligence so precisely that a machine can be made to simulate it. Since 2010, AI has moved to the center of investment and political focus, promising “to give everyone incredible new capabilities” in sectors including healthcare, retail, finance, transport, manufacturing, and government services. AI is leveraged to deepen our ability to automate, detect, personalize, predict, and understand across these sectors.
OpenAI, a leader in AI development, originally aimed “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” However, in 2018 the company shifted away from this mission when it began seeking outside capital to fund its direction. Since then, OpenAI has fostered a culture of secrecy and deception, fueling what is known as the “AI hype cycle.”
On March 22, 2023, the Future of Life Institute published an open letter (“the Open Letter”), signed by Elon Musk and Steve Wozniak, calling for a six-month pause on training AI models more powerful than GPT-4, the latest version of OpenAI’s large language model software. The letter warns that AI systems pose profound risks to society and humanity, cautioning against the development of AI that will automate jobs; outnumber, outsmart, obsolete, and replace us; and lead to a loss of control over civilization. The president of the Center for AI and Digital Policy (“CAIDP”) was also among the Open Letter’s signatories.
On March 30, 2023, CAIDP submitted a complaint against OpenAI, alleging violations of Section 5 of the Federal Trade Commission (“FTC”) Act, which prohibits unfair and deceptive business practices. The complaint calls OpenAI’s GPT-4 “biased, deceptive, and a risk to privacy and public safety.” CAIDP wants the FTC to require independent assessments of GPT products before they are made available to the public and to create a public incident reporting system for GPT-4, modeled on the agency’s consumer fraud reporting systems. More broadly, CAIDP hopes the FTC will create standards for all generative AI products.
Latin American countries have begun to rely on AI’s solutionist appeal to boost their economies, with projections of a 5.4% gain in GDP, equal to US$0.5 trillion, by 2030. However, implementation of AI-related policies has been difficult, or has been discontinued altogether, because of the region’s political instability. The Inter-American Development Bank emphasizes that continuity in government will be critical to implementing AI in a transparent, ethical, and safe manner. This is particularly important in the face of the complaint against OpenAI. OpenAI is just one of many companies capable of turning to deceptive, opaque practices that pose a risk to privacy and public safety in order to fuel the AI hype cycle. Latin America’s AI initiatives are already outpacing the creation of AI policies and regulations. It is imperative that governments address this gap, because the dangers of developing AI with human-competitive intelligence absent regulation will go far beyond the dangers outlined in the Open Letter.