What is Project Q*? The mysterious AI breakthrough, explained

Amid the whirlwind of speculation around the sudden firing and reinstatement of OpenAI CEO Sam Altman, one central question has sat at the heart of the controversy: Why did the board fire Altman in the first place?

We may finally have part of the answer, and it has to do with the handling of a mysterious OpenAI project known internally by the codename “Q*,” or Q Star. Information is limited, but here’s everything we know about the potentially game-changing development so far.

What is Project Q*?

Before moving forward, it should be noted that all the details about Project Q*, including its very existence, come from a single report. Reuters said on November 22 that it had been given the information by “two people familiar with the matter.” According to the article, Project Q* is a new model that excels at mathematics, something current large language models (LLMs) like ChatGPT struggle with. It was reportedly still only at the level of solving grade-school math problems, but as a starting point, it looked promising.

Seems harmless enough, right? Well, not so fast. The existence of Q* was reportedly concerning enough to prompt several staff researchers to write a letter to the board, warning that the project could “threaten humanity.”

Beyond this, not much else is known about how big a project Q* is, what its aims are, or how long it has been in development.

Was Q* really why Sam Altman was fired?

Sam Altman at the OpenAI developer conference. OpenAI

From the very beginning of the speculation around Sam Altman’s firing, one of the chief suspects was his approach to AI safety. Altman was the one who pushed OpenAI to move away from its roots as a nonprofit and toward commercialization. That shift started with the public launch of ChatGPT and the eventual rollout of ChatGPT Plus, both of which kickstarted this new era of generative AI and prompted companies like Google to go public with their own technology as well.

The ethical and safety concerns around making this technology publicly available have always been present, despite all the excitement about how it has already changed the world. Broader concerns about how fast the technology is developing have been well documented too, especially with the jump from GPT-3.5 to GPT-4. Some think the technology is moving too fast without enough regulation or oversight, and according to the Reuters report, “commercializing advances before understanding the consequences” was listed as one of the reasons for Altman’s initial firing.

Although we don’t know whether Altman was specifically named in the researchers’ letter about Q*, the letter is also being cited as one of the reasons for the board’s decision to fire him, a decision that has since been reversed.

It’s worth mentioning that just days before he was fired, Altman said at an AI summit that he had been “in the room” a couple of weeks earlier when a major “frontier of discovery” was pushed forward. The timing lines up with this being a reference to a breakthrough with Q*, and if so, it would confirm Altman’s intimate involvement in the project.

Putting the pieces together, it seems the board’s concerns about commercialization had been building from the beginning, and Altman’s handling of Q* was merely the final straw. That the board was so concerned about the rapid pace of development (and perhaps Altman’s own attitude toward it) that it would fire its all-star CEO is shocking.

The fact that Altman is now back in charge leaves the current state of Q* and its future in question.

Is it really the beginning of AGI?

AGI, which stands for artificial general intelligence, is where OpenAI has been headed from the beginning. Though the term means different things to different people, OpenAI has always defined AGI as “autonomous systems that surpass humans in most economically valuable tasks,” as the Reuters report notes. Nothing in that definition refers to “self-aware systems,” which is often what people presume AGI means.

Still, on the surface, advances in AI mathematics might not seem like a big step in that direction. After all, computers have been helping us with math for decades. But Q* is reportedly not just a calculator. Being literate in math requires humanlike logic and reasoning, which is why researchers seem to think it’s a big deal. With writing and language, an LLM can be more fluid in its responses, often giving a wide range of valid answers to a question or prompt. Math is the exact opposite: there is often just a single correct answer to a problem. The Reuters report suggests that AI researchers believe the approach could even be “applied to novel scientific research.”

Q* still appears to be in the early stages of development, but it may be the biggest advancement we’ve seen since GPT-4. If the hype is to be believed, it should certainly be considered a major step on the road toward AGI, as OpenAI defines it. Depending on your perspective, that’s either cause for optimistic excitement or existential dread.
