Volume 11 is here!
In an era defined by rapid digital progress, the State of Minnesota has taken a noteworthy step to address one of the darker sides of artificial intelligence (AI): the proliferation of malicious deepfakes. With the enactment of a new law (Nonconsensual Dissemination of Deep Fake for Pornography and Election, 2023), effective August 1, 2023, Minnesota aims to regulate the nonconsensual dissemination of deepfakes, specifically targeting those used for pornography or to influence an election within 90 days of that election.
However, as we scrutinize the intricacies of this legislation, we must recognize both its promise and its limitations, as well as the broader AI conundrum it unveils. The law's intent is commendable: to hold individuals accountable for maliciously spreading deepfakes intended to harm another person or to influence an election without consent. The penalty, a fine of at least $10,000, is meant to deter such conduct. The law targets what we might call "malicious deepfakes," those shared without consent and with harmful intent; it does not criminalize deepfake technology itself.
The challenge lies in execution. Deepfakes spread rapidly through social media, making them difficult to regulate, so some responsibility should fall on the platforms that allow malicious deepfakes to circulate. A practical approach could combine two measures: collaborating with the creators of deepfake software to watermark generated content so it can be recognized, and imposing penalties on platforms that enable the dissemination of harmful content.
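To make the watermarking idea concrete, here is a minimal sketch of the embed-and-detect round trip, using simple least-significant-bit steganography in Python with the Pillow library. The provenance tag and its format are hypothetical, and a real provenance watermark would need to be far more robust; this only illustrates the basic mechanism a platform could check for.

```python
# Minimal, illustrative watermark round trip (NOT a robust scheme): embed a
# short provenance tag in an image's least-significant bits, then recover it.
# Assumes Pillow is installed (pip install Pillow); the tag format is made up.
from PIL import Image

TAG = b"GEN-BY:modelX"  # hypothetical provenance tag a generator might embed

def embed_tag(img: Image.Image, tag: bytes = TAG) -> Image.Image:
    """Hide a length-prefixed tag in the red-channel LSBs of the image."""
    data = bytes([len(tag)]) + tag
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]  # LSB-first
    pixels = list(img.convert("RGB").getdata())
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    marked_pixels = [
        ((r & ~1) | bits[i], g, b) if i < len(bits) else (r, g, b)
        for i, (r, g, b) in enumerate(pixels)
    ]
    marked = Image.new("RGB", img.size)
    marked.putdata(marked_pixels)
    return marked

def extract_tag(img: Image.Image) -> bytes:
    """Read the length byte, then that many tag bytes, from red-channel LSBs."""
    lsbs = [r & 1 for (r, _, _) in img.convert("RGB").getdata()]

    def read_byte(offset: int) -> int:
        return sum(lsbs[offset + i] << i for i in range(8))

    length = read_byte(0)
    return bytes(read_byte(8 + 8 * k) for k in range(length))

if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), "gray")  # stand-in for generated output
    watermarked = embed_tag(original)
    print(extract_tag(watermarked))  # b'GEN-BY:modelX'
```

An LSB tag like this is trivially destroyed by re-encoding or cropping, which is precisely why any serious proposal would pair robust watermarking with signed provenance metadata; the sketch simply shows what "watermark content for recognition" could mean in practice.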
Moreover, the 90-day period leading up to an election, while vital, may fall short in addressing long-term disinformation campaigns. Misinformation can sow the seeds of doubt long before elections, and the damage can become irreversible by the time the 90-day window arrives. Therefore, we must extend our efforts to counter disinformation beyond this time frame, proactively addressing prolonged campaigns aimed at manipulating public opinion.
Furthermore, the legislation's focus on punishing individuals may not address the core issue: the unregulated use of AI technology. AI developers should bear some responsibility for the harm their technology can cause; as in other regulated industries, AI development and use should be subject to safeguards and accountability. Enforcement also faces significant challenges when dealing with individuals outside Minnesota, let alone beyond U.S. borders. Internet users are notoriously hard to track, and proving malicious intent can strain prosecution resources, potentially flooding the courts with cases and wasting taxpayer money.
This regulation targeting deepfakes is one component of the broader issue of AI ethics and governance. The development and use of AI must adhere to a well-defined code of conduct, and existing models offer guidance. The European Union's approach (EU AI Act, 2023), which categorizes AI by the risk it poses to human dignity and freedom, could serve as a guiding framework. China's approach (How Does China's Approach, 2023) centers on government interests, permitting AI that aligns with them. The United States takes a more laissez-faire path, enabling private-sector innovation within an agreed-upon code of conduct.
To that end, the United States has identified five principles to guide the design, use, and deployment of automated systems in the age of artificial intelligence. The Blueprint for an AI Bill of Rights (Blueprint for an AI Bill of Rights, 2023) is a guide for a society that protects all people from these threats while using technology in ways that reinforce our highest values. Shaped by the experiences of the American public and informed by researchers, technologists, advocates, journalists, and policymakers, the framework is accompanied by "From Principles to Practice," a handbook with concrete steps for incorporating these protections into policy, practice, and the technological design process. The principles apply wherever automated systems can meaningfully affect the public's rights, opportunities, or access to critical needs. Minnesota has an opportunity to lead in crafting human-centered AI regulation that emphasizes human dignity and ensures responsible use, protecting freedom of speech and democracy while safeguarding individuals from the harmful impacts of unregulated AI technology.
In conclusion, Minnesota's effort to regulate malicious deepfakes represents a positive step, but it raises broader questions about AI's responsible use and the need for a comprehensive AI code of conduct. The regulation of deepfakes is just one aspect of a larger challenge that we must collectively address to balance freedom with responsibility in the age of artificial intelligence.
References:
Nonconsensual Dissemination of Deep Fake for Pornography and Election (2023). Retrieved from https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0
EU AI Act: First Regulation on Artificial Intelligence (2023). Retrieved from https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
How Does China’s Approach To AI Regulation Differ from the US and EU? (2023). Retrieved from https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/?sh=5ab55321351c
Blueprint for an AI Bill of Rights (2023). Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights