Friday, February 3, 2023

AI Checkers (Classifiers) Aren't There Yet

DALL-E: Create a picture of a female professor trying to determine
whether a student or an artificial intelligence program
wrote an essay in renaissance painting style.
(Four choices were generated by DALL-E.)

There is lots of fretting in academia about students using AI to write essays and term papers. Below are some developments.

From The Verge: OpenAI, the company behind DALL-E and ChatGPT, has released a free tool that it says is meant to “distinguish between text written by a human and text written by AIs.” In a press release, it warns that the classifier is “not fully reliable” and “should not be used as a primary decision-making tool.” According to OpenAI, it can be useful in trying to determine whether someone is trying to pass off generated text as something that was written by a person.

The tool, known as a classifier, is relatively simple, though you will need a free OpenAI account to use it. You just paste text into a box, click a button, and it’ll tell you whether it thinks the text is “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated...
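Those five labels presumably correspond to cutoffs on an underlying score the classifier computes. As a rough sketch of how such a mapping might work (the function name and every threshold value below are illustrative assumptions, not OpenAI's published cutoffs):

```python
# Illustrative sketch: mapping a classifier's score to the five verdict
# labels the tool displays. All cutoff values here are assumptions for
# illustration only, not OpenAI's actual thresholds.

def verdict(p_ai: float) -> str:
    """Map an assumed 'probability the text is AI-generated' to a label."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    elif p_ai < 0.45:
        return "unlikely AI-generated"
    elif p_ai < 0.90:
        return "unclear if it is AI-generated"
    elif p_ai < 0.98:
        return "possibly AI-generated"
    else:
        return "likely AI-generated"

print(verdict(0.05))  # very unlikely AI-generated
print(verdict(0.99))  # likely AI-generated
```

Note that under a scheme like this, a "likely" verdict requires extreme confidence from the model, which is one reason the tool misses so much AI-written text.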

OpenAI also says that in its tests the tool correctly labeled AI-written text as “likely AI-written” only 26 percent of the time, and incorrectly flagged human-written text as AI-written 9 percent of the time...
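Those two numbers, a 26 percent true positive rate and a 9 percent false positive rate, are worth putting together. A back-of-the-envelope Bayes calculation (the 20 percent base rate of AI-written essays is an assumption for illustration, not a reported figure) shows why a flag from the tool is weak evidence on its own:

```python
# Back-of-the-envelope calculation using the rates OpenAI reported:
# the classifier flags 26% of AI-written text (true positive rate)
# and wrongly flags 9% of human-written text (false positive rate).
# The 20% share of AI-written essays is an assumed base rate, chosen
# only to illustrate the arithmetic.

tpr = 0.26   # P(flagged | AI-written)
fpr = 0.09   # P(flagged | human-written)
base = 0.20  # assumed P(AI-written)

# Bayes' rule: P(AI-written | flagged)
p_flagged = tpr * base + fpr * (1 - base)
p_ai_given_flag = tpr * base / p_flagged

print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")
# prints P(AI-written | flagged) = 0.42
```

Under these assumptions, a flagged essay is still more likely to be human-written than AI-written, which is exactly why OpenAI says the tool should not be a primary decision-making tool.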

Full story at https://www.theverge.com/2023/1/31/23579942/chatgpt-ai-text-detection-openai-classifier.

====

From the above-mentioned press release:

...Limitations

Our classifier has a number of important limitations. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.

The classifier is very unreliable on short texts (below 1,000 characters). Even longer texts are sometimes incorrectly labeled by the classifier.

Sometimes human-written text will be incorrectly but confidently labeled as AI-written by our classifier.

We recommend using the classifier only for English text. It performs significantly worse in other languages and it is unreliable on code.

Text that is very predictable cannot be reliably identified. For example, it is impossible to predict whether a list of the first 1,000 prime numbers was written by AI or humans, because the correct answer is always the same.

AI-written text can be edited to evade the classifier. Classifiers like ours can be updated and retrained based on successful attacks, but it is unclear whether detection has an advantage in the long-term.

Classifiers based on neural networks are known to be poorly calibrated outside of their training data. For inputs that are very different from text in our training set, the classifier is sometimes extremely confident in a wrong prediction...

Full news release: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/.

====

In short, you can't rely on the classifier to support an accusation of plagiarism, if that is what you call using AI to write an essay. I suspect that, at this point, if an instructor assigned a term paper and insisted on footnotes and references, currently available AI programs would have a problem, though eventually even that might not be adequate protection. Since the "I" in AI is something of a misnomer, an AI program may include false information: it relies on text from the Internet, and there is a lot of false information on the Internet. An essay with perfect grammar that contains false information would be a pretty good indicator of reliance on AI at this stage. The classifier is as unlikely to recognize false information as the AI program that wrote the essay, but presumably the instructor who assigned the essay can make that determination.
