Is ChatGPT more polite than you?

A person typing on a laptop that has ChatGPT on the screen. Getty Images/picture alliance

ChatGPT’s ability to compose a paper nearly flawlessly within seconds has raised concerns about the future of education: cheating has never been easier. These types of issues with the AI chatbot have created a demand for AI-generated text detectors. However, a new study shows that there are some key characteristics that can help distinguish text created by ChatGPT from text written by humans.

For the study, researchers built a machine-learning model that looked for patterns and characteristics in ChatGPT responses that could help distinguish them from human-generated text. According to the study, two experiments were conducted: in one, ChatGPT generated restaurant reviews from scratch; in the other, it was prompted to rephrase original, human-written reviews.
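The article doesn't reproduce the study's model, but the general recipe (train a text classifier on reviews labeled as human- or ChatGPT-written) can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn and a few made-up stand-in reviews, not the study's data or architecture.

```python
# Illustrative only: not the study's actual model. A minimal sketch of the
# general approach of training a classifier on labeled human vs. ChatGPT text,
# assuming scikit-learn and a toy stand-in dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for real labeled reviews (0 = human, 1 = ChatGPT).
texts = [
    "We waited 40 minutes and my pasta was cold. Never again.",           # human
    "I loved the patio but the waiter forgot my drink twice.",            # human
    "The restaurant offers an absolutely delicious dining experience.",   # ChatGPT-like
    "The restaurant provides attentive service and a pleasant ambiance.", # ChatGPT-like
]
labels = [0, 0, 1, 1]

# Word and bigram TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The restaurant delivers a truly delightful experience."]))
```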

The observations showed that ChatGPT describes experiences rather than sharing its feelings, avoids personal pronouns, uses some unusual words and, interestingly, never uses aggressive or rude language. For example, it used the atypical word “inattentive” in its reviews, as well as overly positive language such as “absolutely delicious”, and came across as “overly polite”.

SEE: How to get started using ChatGPT

The study’s results indicate that ChatGPT’s writing style is extremely polite, and unlike humans, it cannot produce responses that include metaphors, irony or sarcasm. Does that mean that the true differentiator between artificial and human intelligence is our rudeness? 

“It is extremely polite, aiming to please different types of requests from various domains fairly well mimicking humans, but that still does not have the profoundness of human language (e.g. irony, metaphors,…),” the study said. 

Other indicators of ChatGPT-written text included a lack of detail. For example, in the reviews it drew on general information it knew about restaurants instead of the specifics a human who had actually dined there would include. The study also found that ChatGPT repeated itself a lot, constantly using the word “restaurant” within its text.
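Cues like missing personal pronouns and heavy repetition of a topic word are simple enough to check mechanically. As a rough illustration, and not a tool from the study, one could compute per-text rates along these lines (the pronoun list and default topic word are assumptions made for the example):

```python
import re
from collections import Counter

# Hand-rolled signals inspired by the study's observations. The pronoun list
# and per-word rates are illustrative assumptions, not the study's features.
PERSONAL_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our", "ours"}

def style_signals(text: str, topic_word: str = "restaurant") -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return {
        "pronoun_rate": sum(counts[w] for w in PERSONAL_PRONOUNS) / total,
        "topic_word_rate": counts[topic_word] / total,
    }

# A ChatGPT-style review tends to score low on pronouns and high on repetition.
print(style_signals("The restaurant offers great food. The restaurant staff is attentive."))
print(style_signals("I loved my burger, but we waited ages for our check at this place."))
```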

If you are attempting to distinguish between human- and ChatGPT-written text, looking out for these characteristics may be your best bet. Tools for detecting AI-generated text exist; however, none of them perform nearly as accurately as needed.

On Wednesday, OpenAI, the research company behind ChatGPT, released a free tool for identifying ChatGPT-written text; however, the tool cannot be relied on. OpenAI’s “classifier” correctly identifies only 26% of AI-written text with a “likely AI-written” designation and produces false positives 9% of the time.
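To see why those figures make the tool hard to trust, a quick back-of-the-envelope calculation helps; the 50/50 split of human and AI texts below is an assumption for illustration, not a number from OpenAI.

```python
# Back-of-the-envelope: what OpenAI's reported rates imply for a batch of texts.
# The 50/50 human/AI split is an illustrative assumption, not OpenAI data.
true_positive_rate = 0.26   # share of AI-written texts flagged "likely AI-written"
false_positive_rate = 0.09  # share of human-written texts wrongly flagged

ai_texts, human_texts = 500, 500  # hypothetical batch of 1,000 texts

flagged_ai = ai_texts * true_positive_rate         # AI texts caught
missed_ai = ai_texts - flagged_ai                  # AI texts that slip through
flagged_human = human_texts * false_positive_rate  # humans wrongly accused

print(f"AI texts caught:    {flagged_ai:.0f}")
print(f"AI texts missed:    {missed_ai:.0f}")
print(f"Humans mis-flagged: {flagged_human:.0f}")
```

Under that assumption, the tool misses most machine-written text while still flagging dozens of human writers, which is why detection alone is hard to rely on.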

ZDNET tested other AI text-detection tools, including GPT-2 Output Detector, Writer AI Content Detector, and Content at Scale AI Content Detection. ZDNET’s results showed that these tools were unreliable as well.

If you are a student trying to get away with using ChatGPT, it might be worth trying to make your original work nicer or going in and making the chatbot’s output ruder (for the record, ZDNET isn’t aiding or abetting cheating). 


