ChatGPT maker wants to end its reputation as a huge cheating machine

The new AI Text Classifier was released by OpenAI today after weeks of debate at schools and universities

The maker of ChatGPT is trying to curb the chatbot's reputation as a freewheeling cheating machine with a new tool that can help teachers detect whether a piece of text was written by a student or by artificial intelligence.

The new AI Text Classifier was released today by OpenAI after weeks of discussion in schools and universities over ChatGPT's ability to write about almost anything on demand, which could fuel academic dishonesty and undermine learning.

OpenAI has already warned that its new tool, like others on the market, is not foolproof. The method for detecting AI-written text “is imperfect and can make mistakes”, warned Jan Leike, head of the OpenAI team charged with making its systems safer.

“Therefore, it should not be relied upon on its own when making decisions”, Leike warned.

Teenagers and college students are among the millions of people who began experimenting with ChatGPT after it launched on Nov. 30 as a free application on the OpenAI website. While some have found ways to use it creatively and harmlessly, the ease with which it answers homework questions and helps with other assignments has sown panic among some educators.

As schools opened for the new term, major public school districts such as New York City and Los Angeles began blocking its use in classrooms and on school devices.

The Seattle public school district blocked ChatGPT on all school devices, but later allowed access for teachers who wanted to use it as a teaching tool, said Tim Robinson, the district's spokesman.

“We cannot afford to ignore it,” Robinson said.

The district is also discussing expanding the use of ChatGPT in classrooms so that teachers can use it to train students to be better critical thinkers, and so that students can use it as a “personal tutor” or to help generate ideas for schoolwork, Robinson said.

School districts across the US say the conversation around ChatGPT is evolving quickly.

“The initial reaction was, ‘Oh my God, how are we going to stop the avalanche of cheating that's going to happen with ChatGPT?’” said Devin Page, a technology specialist with the public school district in Calvert County, Maryland. But there is now a growing understanding that “this is the future” and that blocking it is not the solution, he said.

“I think it would be naive if we weren't aware of the dangers this tool poses, but we would also fail our students if we banned them from using it, given all its potential power,” said Page, who acknowledges that districts like his may eventually unblock ChatGPT, especially once an authorship detection tool is available.

OpenAI emphasized the limitations of its detection tool in a blog post today, but added that, in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuses of artificial intelligence to mimic humans.

The longer the passage, the better the tool is at determining whether the author was a human or an artificial intelligence. Paste in any text – a college admissions essay, say, or a literary analysis – and the tool will rate it as “very unlikely, unlikely, unclear if it is, possibly, or likely” to have been generated by artificial intelligence.

But, much like ChatGPT itself – which, despite having been trained on a vast trove of books, newspapers and online texts, can confidently serve up falsehoods or nonsense – it is not easy to interpret how the classifier arrives at a result.

"Fundamentally, we don't know what pattern it pays attention to or how it works internally," Leike acknowledged. "There's not much we can say at this point about how the classifier works."

Higher education institutions in several countries have also begun to debate the responsible use of artificial intelligence. Sciences Po, one of France's most prestigious universities, banned its use last week and warned that anyone caught using it in written or oral work could be banned from Sciences Po and other institutions.

In response to the challenge, OpenAI said it has spent several weeks developing guidelines to help educators.

"As with many other technologies, it may be that a district decides that it is inappropriate to use it in classrooms," said OpenAI policy researcher Lama Ahmad. “We don't push in either direction. We just want to provide the necessary information to make the decisions they consider right”.

This is a rare public spotlight for the San Francisco-based, research-focused start-up, which is now backed by billions of dollars from its partner Microsoft and faces growing interest from the public and from governments.

France's minister for the digital economy, Jean-Noël Barrot, recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told the World Economic Forum in Davos that he was optimistic about the technology.

But Barrot, a former professor at the Massachusetts Institute of Technology (MIT) and at the Paris business school HEC, also highlighted difficult ethical questions that will have to be addressed.

“If you're in a law school, there is room for concern, because ChatGPT, among other tools, can already perform impressively on exams. But if you're in an economics faculty, then ChatGPT will have a hard time finding or delivering what is expected of someone studying economics,” he said.

Barrot added that it will be increasingly important for users to understand the basics of how these systems work, so that they are aware of the biases they may contain.
