
Saturday, April 22, 2023

AI at Berkeley's Law School

You would expect law schools in particular to create rules that folks could follow. But it's not clear (to yours truly, at least) exactly what the rules at Berkeley's law school permit. We have provided our own footnoted comments, marked with asterisks. From Reuters:

University of California Berkeley law school rolls out AI policy ahead of final exams

By Karen Sloan, April 20, 2023

===

Summary

  • School allows some uses of generative AI, bars others
  • Policy is meant to provide clarity around rapidly evolving technology

===

The University of California, Berkeley School of Law is among the first law schools to adopt a formal policy on student use of generative artificial intelligence such as ChatGPT. The policy, rolled out April 14, allows students to use AI technology to conduct research* or correct grammar.** But it may not be used on exams or to compose any submitted assignments. And it can’t be employed in any way that constitutes plagiarism, which Berkeley defines as repackaging the ideas of others.***

The policy means that law students would be in violation of the school’s honor code if they used ChatGPT or a similar program to draft their classwork and merely reworded the text before turning it in, said professor Chris Hoofnagle, who worked with two other faculty members to develop the policy over the past month.****

The new policy is a default. Individual professors may deviate from the rules if they provide written notice to students in advance. “The approach of finals made us realize that we had to say something,” Hoofnagle said. “We want to make sure we have clear guidelines so that students don’t inadvertently attract an honor code violation.”

The November introduction of ChatGPT, and of subsequent large language models that generate sophisticated, human-like responses based on requests from users and mountains of data, has prompted significant handwringing among educators who fear students will use the programs to cheat. But others say generative AI has the potential to improve student learning when used appropriately.

Hoofnagle said he was not aware of any other law schools that have adopted formal policies on generative artificial intelligence so far. The Association of American Law Schools is not tracking such policies, a spokesman said Thursday. Representatives for the law schools at Yale, Stanford and Harvard did not immediately respond to requests for comment on whether they have or are developing policies on generative AI.

Berkeley sought to find a balance that allowed some but not all uses, Hoofnagle said, noting that major legal research tools such as Westlaw and Lexis will eventually incorporate that technology. “The reality is that generative artificial intelligence is going to be in everything, so it will be impossible to tell students they can’t use it,” he said.

Source: https://www.reuters.com/legal/transactional/u-california-berkeley-law-school-rolls-out-ai-policy-ahead-final-exams-2023-04-20/.

===

*Before AI became a big topic, any search for information in Google or other similar search engines involved a degree of AI. Not clear how this provision changes anything.

**Word processors such as Word have for many years had grammar and spelling correction options built in. Not clear how this provision changes anything.

***If an AI program writes something, just who are the "others" whose work is being repackaged?

****While this provision is clear enough, the question is whether it can be enforced. As we have noted in prior posts, there are no current programs that can reliably detect AI-generated text. Moreover, lawyers are supposed to represent their clients as best they can. If an AI program can draft a persuasive brief for a client, why wouldn't a lawyer want to use it? The main problem at this point is that such programs may include misinformation in what they write. Thus, a lawyer who used AI would ultimately have a professional duty to check the validity of what such a draft contained. In effect: "draft, but verify."

===

Bottom line: We are just at the beginning of coming up with sensible rules for AI in higher ed. And we're not there yet.
