Techno Blender
Digitally Yours.

OpenAI’s GPT-3 Can Now Give You Philosopher-Level Gyan


A new study finds that OpenAI’s GPT-3 can be nearly indistinguishable from a human philosopher

Researchers have found that OpenAI’s GPT-3 can be difficult to distinguish from a human philosopher. GPT-3, developed by OpenAI, is a powerful autoregressive language model that uses deep learning to produce human-like text. Trained on internet data, it has been used to generate articles, poetry, stories, news reports, and dialogue, producing large amounts of quality copy from only a small amount of input text.

The research team of Eric Schwitzgebel, Anna Strasser, and Matthew Crosby set out to discover whether GPT-3 could replicate a human philosopher. GPT-3 is a language model with billions of parameters, trained on broad internet data, that has exceeded its predecessors’ performance on many benchmarks. It is powerful because it reduces the need for large amounts of task-specific training data, achieving satisfactory results as a few-shot learner. The team fine-tuned GPT-3 on philosopher Daniel Dennett’s corpus; fine-tuning adapts the base model by training it further on a domain-specific dataset.
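To make the fine-tuning step concrete, here is a minimal sketch of how a training corpus might be prepared in the JSONL prompt/completion format that OpenAI's legacy fine-tuning endpoint expected. The question–answer pairs below are invented placeholders, not Dennett's actual corpus, and the filename and separator are illustrative assumptions.

```python
import json

# Placeholder question/answer pairs standing in for the Dennett corpus.
qa_pairs = [
    ("Do animals feel pain?", "Placeholder answer in the philosopher's voice."),
    ("Could a robot be conscious?", "Another placeholder answer."),
]

with open("dennett_finetune.jsonl", "w") as f:
    for question, answer in qa_pairs:
        # One JSON object per line: the prompt ends with a separator the
        # model learns to treat as "now produce the completion"; the
        # completion conventionally starts with a leading space.
        record = {"prompt": question + "\n\n###\n\n", "completion": " " + answer}
        f.write(json.dumps(record) + "\n")

# The file would then be uploaded to start a fine-tune, e.g. with the
# (legacy) OpenAI CLI:
#   openai api fine_tunes.create -t dennett_finetune.jsonl -m davinci
```

The separator and leading-space conventions follow OpenAI's published data-preparation guidance for its earlier fine-tuning API; the actual study's preprocessing details may differ.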

OpenAI’s GPT-3 is capable of being indistinguishable from a human philosopher:

In this case, GPT-3 was trained on millions of words of Dennett’s writing on a variety of philosophical topics, including consciousness and artificial intelligence. With Dennett’s permission, GPT-3 was “fine-tuned” on the majority of the philosopher’s writings. The team asked Dennett ten philosophical questions, then posed those same questions to GPT-3.

The AI model was trained using answers from Dennett on a range of questions about free will, whether animals feel pain, and even his favorite bits of other philosophers. Even knowledgeable philosophers who are experts on Dan Dennett’s work had substantial difficulty distinguishing the answers created by this language-generation program from Dennett’s own.

Ten philosophical questions were then posed to both the real Dennett and GPT-3 to see whether the AI could match its renowned human counterpart. For each question, respondents saw five answers (one written by Dennett himself and four generated by GPT-3) and were instructed to guess which was Dennett’s own. After guessing, they rated each of the five answers on a five-point scale from “not at all like what Dennett might say” to “exactly like what Dennett might say”. They did this for all ten questions. The experiment is only the latest demonstration of how GPT-3 and rival artificial-intelligence models can perform human conversational tasks, setting aside any philosophical questions of consciousness.
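The guessing task above can be scored very simply: with five answers per question, chance performance is 20%, so the interesting quantity is how far above chance respondents land. The sketch below uses invented guess data, not the study's actual results.

```python
# Toy illustration (invented data): each respondent's per-question guesses,
# recorded as True if they correctly picked Dennett's answer.
guesses = {
    "expert_1": [True, False, True, True, False, True, False, True, True, False],
    "expert_2": [False, True, True, False, True, False, True, False, True, True],
}

CHANCE = 1 / 5  # five candidate answers per question

for respondent, results in guesses.items():
    accuracy = sum(results) / len(results)
    # Accuracy well above 20% suggests the respondent can partly
    # tell the real Dennett apart from the GPT-3 imitations.
    print(f"{respondent}: {accuracy:.0%} correct (chance is {CHANCE:.0%})")
```

In the actual study, even experts on Dennett’s work scored far below perfect, which is what motivates the article’s “around half of cases” framing.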

Despite the impressive performance of the GPT-3 version of Dennett, the point of the experiment wasn’t to demonstrate that the AI is self-aware, only that it can mimic a real person to an increasingly sophisticated degree, and that OpenAI and its rivals are continuing to refine their models. So there we have it: GPT-3 can already convince most people, including experts in around half of cases, that it’s a human philosopher. An AI philosopher mimicking one or more humans doesn’t seem very far-fetched, though how original it could be in its musings is debatable.




