Artificial Intelligence & Robotics
Latest ‘Bluebook’ has ‘bonkers’ rule on citing to artificial intelligence
If you’re not sure about how and when to cite content generated by artificial intelligence, a new citation rule is unlikely to clear up the confusion, according to experts who spoke with LawSites. (Photo by Howchou, PD ineligible (books), via Wikimedia Commons)
Updated: If you’re not sure about how and when to cite content generated by artificial intelligence, a new citation rule is unlikely to clear up the confusion, according to experts who spoke with LawSites.
The 22nd edition of The Bluebook: A Uniform System of Citation, released in May, includes a new Rule 18.3 for citing output from generative AI. Critics argue that the new rule “is fundamentally flawed in both conception and execution,” LawSites reports.
Critics include Susan Tanner, a professor at the University of Louisville’s Louis D. Brandeis School of Law, who called the new rule “bonkers” in a post at Medium.
The rule requires that authors citing output from generative AI, such as ChatGPT conversations or Google search results, save a screenshot of that output as a PDF. The rule has three sections, covering large language models, search results and AI-generated content, with slightly differing citation rules for each.
One problem, Tanner said, is that the rule treats AI as a citable authority, rather than a research tool.
“What would a sensible approach to AI citation look like?” Tanner wrote. “First, recognize that in 99% of cases, we shouldn’t be citing AI at all. We should cite the verified sources AI helped us find.”
In the rare case in which an AI output should be cited, the author should remember that the citation is documenting what was said by generative AI, not the truth of what was said, Tanner said. She gives this example: “OpenAI, ChatGPT-4, ‘Explain the hearsay rule in Kentucky’ (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content).”
Jessica R. Gunder, an assistant professor of law at the University of Idaho College of Law, offered another example of an appropriate citation to generative AI in her critique of Rule 18.3 posted to SSRN.
“If an author wanted to highlight the unreliability of a generative AI tool by pointing to the fact that the tool crafted a pizza recipe that included glue as an ingredient to keep the cheese from falling off the slice, a citation, and preservation of the generative AI output, would be appropriate,” she wrote.
Cullen O’Keefe, the director of research at the Institute for Law & AI, sees another problem. The rule differentiates between large language models and “AI-generated content,” but content generated by large language models is a type of AI-generated content.
In an article on the Substack blog Jural Networks, he suggested that one interpretation of the rule governing AI-generated content is that it applies to things such as images, audio recordings and sound.
He also sees inconsistencies about whether to use company names along with model names and when to require the date of the generation and the prompt used.
“I don’t mean to be too harsh on the editors, whom I commend for tackling this issue head-on,” O’Keefe wrote. “But this rule lacks the usual precision for which The Bluebook is (in)famous.”
Updated Sept. 25 at 2:34 p.m. to accurately cite Cullen O’Keefe’s point about large language models. Updated Sept. 27 at 8 a.m. to correct Gunder’s title.