On July 12, 2023, the U.S. Senate Committee on the Judiciary’s Subcommittee on Intellectual Property held its second hearing on artificial intelligence (AI) and intellectual property (IP). The first hearing, held on June 7, 2023, focused on AI’s implications for patent law, while this second hearing centered on copyright issues. Hearing participants, both senators and witnesses, seemed to recognize the need to strike a balance between promoting AI innovation and protecting creators’ rights, particularly in the context of training generative AI models on data gathered from the Internet. While opinions on how to achieve that balance certainly differed across witnesses, who ranged from the head of public policy at Stability AI to a concept artist, the general consensus was that more clarity at the federal level on how to protect creators’ rights would benefit stakeholders across the board.
Front and center was the thorny issue of training generative AI models. Many generative AI models are trained on massive amounts of data, often scraped from the Internet and including expressive works created by individuals, without the consent of rightsholders.
Accordingly, the senators focused their questioning mainly on five topics, providing a glimpse of what potential federal legislation could look like:
- Need for New Federal Legislation?: Committee members explored whether federal legislation is needed to provide protections and legal remedies for creators to enforce their rights when their works are used in generative AI training. They probed whether establishing a federal right of publicity would help achieve this end. The senators asked why state-level right of publicity laws were insufficient and what federal legislation should mandate to give creators the tools to protect their rights. Among the witnesses’ suggestions was a federal anti-impersonation law that would establish a federal right of action against those who intentionally impersonate a creator.
- Licensing for Training, and Questions of Opt-Out and Opt-In Policies: Senators questioned how realistic it is for creators to opt out of having their data used in a training set and how, as a technical matter, models implement such opt-outs. Jeffrey Harleston, general counsel and executive vice president of business and legal affairs at Universal Music Group, advocated for the creation of a digital marketplace, akin to the current licensing marketplace for music, in which creators could opt in and license their works to AI companies for use in training sets.
- Fair Use: The lawmakers and witnesses considered fair use’s role in determining the legality of using creators’ works in training sets and of the resulting outputs. Sen. Marsha Blackburn (R-Tenn.), whose constituents include Nashville’s heavily invested songwriting community, pressed the witnesses on whether the fair use test should focus on whether the generative AI output is a commercial replacement for the copyrighted work.
- Comparative International Standards: Participants also discussed what other Western democracies are doing to balance AI innovation with creators’ rights. Matthew Sag, professor of law, AI, machine learning and data science at Emory University School of Law, and Ben Brooks, head of public policy at Stability AI, both pointed to the European Union’s copyright laws, which require companies to honor opt-outs when data mining is conducted for commercial purposes and require rightsholders to express those opt-outs through machine-readable means, such as metadata tagging.
- Consumer Disclosures: Finally, discussion centered on what disclosures or indicators should be used to inform consumers that content was created by AI. Senators questioned the witnesses on the best way to convey that information and emphasized the importance of such disclosures in enabling consumers to distinguish AI-generated content from human-created content.
While the senators’ questions certainly reflected an eagerness to understand the AI landscape and what Congress can do to protect IP and creators’ rights, it remains to be seen how serious Congress is about enacting federal legislation to protect the creative industry and regulate generative AI. It is also an open question how any new proposals would interact with existing enforcement tools under copyright law, the Lanham Act, and state laws, and the timeline for introducing potential new federal legislation is unclear. In the meantime, stakeholders have begun closely monitoring these issues and would do well to prepare for potential changes in the law by taking appropriate prophylactic measures to minimize risk.