On March 23, 2023, the Federal Trade Commission (FTC) announced proposed updates to its Negative Option Rule, making it easier for consumers to cancel their enrollment in subscriptions and memberships. Querying GPT-4, the new version of the generative artificial intelligence (AI) technology created by OpenAI that powers ChatGPT Plus, other chatbots, search engines and other computer systems, shows that GPT-4 has no record of this significant FTC proposal.
On December 14, 2022, the FTC announced that it is seeking public comment on potential updates to its Guides for the Use of Environmental Claims (Green Guides). The FTC requested comments on whether to expand the Green Guides to include additional guidance on claims concerning climate change, carbon offsets, energy use and efficiency, degradability, organic content and sustainability. GPT-4 has no record of this significant FTC proposal.
On October 20, 2022, the FTC announced that it was “exploring a potential rule to combat deceptive or unfair review and endorsement practices, such as using fake reviews, suppressing negative reviews, and paying for positive reviews,” stating, “Deceptive and manipulated reviews and endorsements cheat consumers looking for real feedback on a product or service and undercut honest businesses.” Once again, GPT-4 has no record of this.
The reason for this huge blind spot is that GPT-4 is not up to date. OpenAI states on its website, “GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021).” As The New Yorker reported, “According to GPT-4, abortion is still a constitutional right.” The article is correct. I asked GPT-4 whether abortion is a constitutional right and got the following response: “In the United States, the Supreme Court has recognized that a woman has a constitutional right to choose to have an abortion. The landmark case of Roe v. Wade, decided in 1973, held that a state law prohibiting abortion except to save the life of the mother was unconstitutional….” There is no mention of the fact that the Supreme Court overturned Roe in Dobbs v. Jackson Women’s Health Organization in June 2022.
GPT-4 and other AI tools that have recently been introduced, such as Microsoft’s new Bing search engine and Google’s new chatbot Bard, provide improved tools for attorneys. The legal tech company Casetext recently introduced an AI legal assistant named CoCounsel, which is powered by technology from OpenAI. CoCounsel allows an attorney to ask questions that might be asked of a junior associate. For example, Casetext states that CoCounsel can draft a legal research memo. “Ask a research question, and give as much detail as you like—the facts, jurisdiction, nuances—and in minutes CoCounsel retrieves on-point resources and provides an answer with explanation and supporting sources.”
The Brookings Institution has stated that AI “is poised to fundamentally reshape the practice of law.” Large language-based systems like GPT-4 represent the first time that widely available technology can perform sophisticated writing and research tasks that previously could be performed only by highly trained people, such as attorneys. According to Brookings, “Law firms that effectively leverage emerging AI technologies will be able to offer services at lower cost, higher efficiency, and with higher odds of favorable outcomes in litigation. Law firms that fail to capitalize on the power of AI will be unable to remain cost-competitive, losing clients and undermining their ability to attract and retain talent.”
However, AI has limitations in addition to its knowledge cutoff date, and it can make mistakes. On its website, OpenAI states: “Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.”
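One of the safeguards OpenAI mentions, “grounding with additional context,” can be illustrated with a short sketch: rather than relying on the model’s stale training data, the user supplies current, verified source material in the prompt and instructs the model to answer only from it. The function name and the example source text below are hypothetical illustrations for this article, not part of any real product or API.

```python
# Hypothetical sketch of "grounding with additional context": supply current,
# verified sources in the prompt so the model does not answer from memory.
# build_grounded_prompt and the example source are illustrative assumptions.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from the
    supplied sources, reducing reliance on out-of-date training data."""
    context = "\n\n".join(f"[Source {i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

# Example: grounding a post-cutoff legal question with a current source.
prompt = build_grounded_prompt(
    "Is abortion a constitutional right in the United States?",
    [
        "In Dobbs v. Jackson Women's Health Organization (June 2022), "
        "the Supreme Court overruled Roe v. Wade."
    ],
)
print(prompt)
```

Whatever the model then returns would still need the human review that OpenAI recommends; grounding narrows, but does not eliminate, the risk of hallucinated or outdated answers.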
Google posts a disclaimer under Bard’s query box warning users that issues may arise: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”
In a Q&A on the Bing website, in response to the question “Are Bing’s AI-generated responses always factual?,” Microsoft states: “Bing aims to base all its responses on reliable sources - but AI can make mistakes, and third party content on the internet may not always be accurate or reliable. Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double check the facts before making decisions or taking action based on Bing’s responses.”
AI can even be used to create fake images. As reported by The Hill, fabricated images of former President Donald Trump being arrested by the New York Police Department before his actual arrest circulated on social media.
Why It Matters
AI is clearly a useful tool for attorneys, and the latest developments have made it more useful still. However, attorneys must also use AI carefully.
AI can be a powerful tool for attorneys, but it must be used with care because of several reasons:
Risk of Bias: AI systems are only as good as the data they are trained on, and if the data is biased or incomplete, the AI system can produce biased or incomplete results. Attorneys must be careful to ensure that the data used to train an AI system is unbiased and accurate.
Legal Ethics: Attorneys have a duty to provide competent and ethical representation to their clients. The use of AI must not violate any ethical obligations, such as confidentiality or conflicts of interest.
Interpretation of Results: The results produced by an AI system are not always straightforward and require interpretation by attorneys. Attorneys must have the necessary knowledge and skills to understand and interpret the results correctly.
Liability: Attorneys can be held liable for the use of AI systems that produce incorrect or incomplete results. Therefore, they must exercise caution and ensure that the results produced by an AI system are thoroughly reviewed and verified.
Human Judgment: AI systems are not capable of replacing human judgment, creativity, and intuition. Attorneys must be mindful that the use of AI does not replace or undermine their own expertise, skills, and professional judgment.
Overall, AI can be a valuable tool for attorneys, but it must be used with care and attention to ensure that it does not violate ethical obligations, produce biased or incomplete results, or replace human judgment.
That quote was generated by GPT-4.