

AI writing detectors ‘do not work,’ OpenAI confirms

Last week, OpenAI published a promotional blog post with tips for educators, showing how some teachers are using ChatGPT as an educational aid, together with suggested prompts to get started. In a related FAQ, the company officially admits that AI writing detectors don’t work, even though such detectors are frequently used to punish students with false positives.

In a section of the FAQ titled “Do AI detectors work?”, OpenAI writes, “In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.”

These detectors often give false positives because they rely heavily on unproven detection metrics. Ultimately, there is nothing special about AI-written text that reliably distinguishes it from human-written text, and detectors can be defeated by rephrasing. In July, OpenAI discontinued its AI Classifier, an experimental tool designed to detect AI-written text; it had an accuracy rate of just 26 percent.
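To see why this is fragile, consider the deliberately naive sketch below. It is a hypothetical illustration only (no real detector is implemented this way): it flags text as “AI-written” using arbitrary thresholds over shallow surface statistics, the kind of unproven metric at issue, and a light paraphrase is enough to flip its verdict.

# Hypothetical toy "detector" for illustration only. It flags text as
# AI-written when average sentence length and word repetition cross
# arbitrary thresholds -- stand-ins for the "unproven detection metrics"
# the FAQ refers to.
def toy_ai_detector(text: str) -> bool:
    words = text.lower().split()
    sentences = [s for s in text.split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    repetition = 1 - len(set(words)) / max(len(words), 1)
    # Arbitrary cutoffs: long sentences plus repetitive wording -> "AI".
    return avg_sentence_len > 20 and repetition > 0.25

original = ("The results demonstrate that the proposed method achieves "
            "superior performance across all evaluated benchmarks and "
            "the proposed method generalizes well to unseen data and "
            "the proposed method is robust to noise in the proposed method.")

paraphrase = ("Our approach beat every benchmark we tried. It also held "
              "up on new data and stayed solid even with noisy inputs.")

print(toy_ai_detector(original))    # True  -- flagged as "AI-written"
print(toy_ai_detector(paraphrase))  # False -- same content, lightly rephrased

Note that tightening the thresholds to catch the paraphrase would simply flag more ordinary human writing instead, which is exactly the false-positive problem described above.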

The FAQ also addresses another misconception: that ChatGPT itself can tell whether a given text is AI-written. OpenAI wrote, “Additionally, ChatGPT has no ‘knowledge’ of what content could be AI-generated. It will sometimes make up responses to questions like ‘did you write this essay?’ or ‘could this have been written by AI?’ These responses are random and have no basis in fact.”

Relatedly, OpenAI also addresses its AI models’ tendency to give false information. The company wrote, “Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a ‘hallucination’ in the literature). It can even make up things like quotes or citations, so don’t use it as your only source for research.”

The fact that AI detectors do not work does not mean humans can never detect AI writing. Someone familiar with a person’s usual writing style can notice when the phrasing suddenly changes or starts to sound like an AI language model. And careless use of AI leaves telltale signs: the phrase “regenerate response,” the label of a button in ChatGPT, has been spotted verbatim in a published scientific paper.

Wharton professor Ethan Mollick advises avoiding AI detection tools entirely because “AI detectors have high false positive rates, and they should not be used as a result.”

 

