
The Exponential Growth of Generative AI

While the opportunities offered by generative AI are significant, there are also major challenges, such as the difficulty and cost of developing or maintaining large language models (LLMs), as well as their potential inaccuracy.

The popularity of generative AI is growing rapidly. Artificial Intelligence is now a serious subject of conversation everywhere, from dinner parties to news channels to digital transformation teams. Of course, Artificial Intelligence technologies in general, and ChatGPT more specifically, did not come out of nowhere. As far back as 2020, the most forward-looking experts were already predicting that generative AI would be an essential pillar of the next generation of AI.

“Today’s machine learning models mostly interpret and classify existing data: for instance, recognizing faces or identifying fraud. Generative AI is a fast-growing new field that focuses instead on building AI that can generate its own novel content.” – Forbes, “The Next Generation Of Artificial Intelligence,” October 2020

Recent work across all areas of AI is accelerating progress in generative AI. The next generation of LLMs is already being developed at start-ups, tech giants, and AI research groups.

Models That Can Generate Their Own Training Data

A new avenue of AI research is exploring how LLMs can generate their own training data to improve their performance. The idea is to draw inspiration from the way humans learn on their own when reflecting on a topic. Researchers at Google have already built an LLM that can generate questions, produce answers, filter for high-quality results, and refine the selected answers. Specifically, researchers from Google and the University of Illinois Urbana-Champaign (UIUC) have introduced a method known as Language Model Self-Improved (LMSI), which fine-tunes an LLM on a dataset created by the model itself.
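The sketch below illustrates the general shape of such a self-improvement loop. It is a minimal illustration, not the LMSI recipe itself: `llm_generate` is a hypothetical stand-in for any text-generation API, and the majority-vote filter (a simple form of self-consistency) is one plausible way to keep only high-confidence answers.

```python
from collections import Counter

def llm_generate(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder for a call to a real LLM."""
    raise NotImplementedError

def self_generate_training_data(questions, samples_per_question=8):
    """Build a fine-tuning set from the model's own high-confidence answers."""
    dataset = []
    for q in questions:
        # Sample several chain-of-thought answers to the same question.
        answers = [llm_generate(f"Q: {q}\nLet's think step by step.")
                   for _ in range(samples_per_question)]
        # Keep the answer the samples converge on most often; agreement
        # across samples serves as a proxy for quality.
        best, count = Counter(answers).most_common(1)[0]
        if count / samples_per_question >= 0.5:  # assumed confidence cutoff
            dataset.append({"prompt": q, "completion": best})
    return dataset  # the model is then fine-tuned on this dataset
```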

An LLM may also improve its performance by generating its own natural language instructions and adapting to them. Research by Google and Carnegie Mellon University further shows that LLMs can provide more accurate answers if they “first recite what they know about a topic before responding,” much as humans gather their thoughts before sharing a point of view.
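A recite-then-answer prompt can be sketched in a few lines. This reuses the hypothetical `llm_generate` placeholder from the sketch above and shows only the two-step prompting pattern, not the actual recitation-augmented method from the research.

```python
def recite_then_answer(question: str) -> str:
    # Step 1: ask the model to write down what it knows about the topic.
    recitation = llm_generate(
        f"Recite the relevant facts you know about this topic:\n{question}"
    )
    # Step 2: feed the recitation back as context for the final answer.
    return llm_generate(
        f"Background:\n{recitation}\n\n"
        f"Using the background above, answer the question:\n{question}"
    )
```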

Recent advances in language modeling have shown that LLMs can dramatically elevate the performance of Natural Language Processing (NLP) applications. However, exploiting them can be challenging because of their sheer size, which demands large amounts of memory and compute power for training.

To unlock the true potential of language modeling, NVIDIA and Microsoft have been working on a large-scale NLP model called Megatron-Turing Natural Language Generation (MT-NLG). With 530 billion parameters, it is roughly three times the size of OpenAI's 175-billion-parameter GPT-3, the benchmark model today.
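Some back-of-the-envelope arithmetic makes that scale concrete. The snippet below computes the size ratio between the two models and a rough lower bound on the memory needed just to hold MT-NLG's weights in 16-bit precision (optimizer state and activations during training would add several times more).

```python
MT_NLG_PARAMS = 530e9   # MT-NLG parameter count
GPT3_PARAMS = 175e9     # GPT-3 parameter count

print(f"Size ratio: {MT_NLG_PARAMS / GPT3_PARAMS:.1f}x")    # ~3.0x
# Two bytes per parameter in fp16, weights only:
print(f"Weights alone: {MT_NLG_PARAMS * 2 / 1e12:.2f} TB")  # ~1.06 TB
```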

While this model appears to overcome some of the obstacles of automated NLP, it still needs improvement. NVIDIA and Microsoft note that while these LLMs represent a great leap forward in language generation, they still have flaws and biases. The researchers' observations reveal that the model can perpetuate stereotypes and biases present in the data it was trained on, which points back to the importance of careful data collection, analysis, modeling, and supervised training.

Models That Can Verify Facts on Their Own

Generative AI models are trained on data from the Internet and produce predictions in response to user requests. However, there is no guarantee that those predictions will be accurate or unbiased, and it can be difficult to trace where the information behind a given response came from.

The use of generative AI therefore raises moral, legal, and ethical issues with a potential impact on business. The concerns range from ownership of the generated content to the risk of producing “invented” answers. As such, it is wise to be cautious about how the information produced by generative AI is used in the short term.

Current LLMs, such as Google's LaMDA (Language Model for Dialogue Applications), may produce inaccurate or false information. See below the now-famous wrong assertion by Google's Bard about the James Webb Space Telescope.

– What new discoveries from the James Webb Space Telescope can I tell my nine-year-old about? 

– Webb took the very first pictures of exoplanets or planets outside the solar system.

However, new capabilities are being developed to work around this problem. These include the ability for LLMs to extract information from external sources and to provide references for the information they return. For example, OpenAI's WebGPT improves the factual accuracy of language models through web browsing.
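The general pattern, retrieve evidence first and then ask the model to answer with citations, can be sketched as follows. Both `search_web` (assumed to return a list of dicts with `url` and `snippet` keys) and the reuse of the `llm_generate` placeholder are illustrative assumptions; WebGPT's actual browsing interface is not publicly exposed.

```python
def answer_with_references(question: str, search_web) -> str:
    # Retrieve a handful of supporting documents for the question.
    results = search_web(question)[:3]
    evidence = "\n".join(
        f"[{i + 1}] {r['snippet']} (source: {r['url']})"
        for i, r in enumerate(results)
    )
    # Ask the model to answer using only the evidence, citing sources.
    return llm_generate(
        f"Evidence:\n{evidence}\n\n"
        f"Answer the question using only the evidence above, and cite "
        f"sources as [1], [2], ...:\n{question}"
    )
```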

The paper “Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback” by Microsoft Research and Columbia University proposes a system called LLM-AUGMENTER, which aims to make LLMs usable in mission-critical applications. The system improves the accuracy of LLM-generated responses by integrating external knowledge from task-specific databases, and it iteratively revises prompts to enhance the accuracy and reliability of responses. The system has been tested in dialogue and question-answering scenarios, where it appears to reduce false information without degrading the quality of the response.
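A simplified sketch of such a retrieve-generate-verify-revise loop is shown below, again reusing the `llm_generate` placeholder. The `retrieve` and `utility_score` callables, the score threshold, and the feedback wording are all assumptions made for illustration; the paper's actual components are considerably more elaborate.

```python
def augmented_answer(question, retrieve, utility_score,
                     max_rounds=3, threshold=0.8):
    feedback = ""
    candidate = ""
    for _ in range(max_rounds):
        evidence = retrieve(question)  # task-specific knowledge base lookup
        prompt = (f"Evidence: {evidence}\n{feedback}"
                  f"Answer based strictly on the evidence:\n{question}")
        candidate = llm_generate(prompt)
        score = utility_score(candidate, evidence)  # automated fact check
        if score >= threshold:
            return candidate
        # Fold the verifier's feedback into the next round's prompt.
        feedback = (f"A previous answer scored {score:.2f}; "
                    f"revise any unsupported claims.\n")
    return candidate  # best effort after max_rounds
```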

LLMs are estimated to have grown in size by a factor of 10 each year in recent years. The good news is that as these models grow in size and complexity, their capabilities grow with them. However, LLMs are difficult and expensive to develop and maintain, and their cost and inaccuracy are major challenges that must be addressed if they are to reach their full potential.

Summary

Generative AI, which focuses on creating AI that can produce its own novel content, is a rapidly growing field. Recent advances across all areas of AI are accelerating it, including the development of models capable of generating their own training data to improve their performance and models capable of verifying facts on their own.

LLMs remain complex to develop and maintain, and their cost and inaccuracy are still major challenges. But without a doubt, the efforts of the major technology and research players will continue to increase the capabilities of these systems, allowing them to reach their full potential quickly.


