In a November 23 report, Reuters cited two sources as saying that the letter had not previously been reported and that, together with the AI algorithm it described, it was a key development in the run-up to the firing of Sam Altman, co-founder and CEO of OpenAI, on November 17. According to these sources, the letter was one factor in a longer list of grievances behind the board's decision to remove him.
The researchers who wrote and sent the letter did not immediately respond to a request for comment.
Altman made a triumphant return late on November 21, after more than 700 OpenAI employees threatened to quit and join Microsoft alongside the ousted CEO. His reinstatement ended nearly a week of turmoil and unexpected twists at OpenAI, one of the world's most prominent AI research companies and maker of the popular ChatGPT application.
Mr. Altman at an APEC event in the US on November 16.
According to one of the sources, Mira Murati, a longtime OpenAI executive, mentioned a project called Q* (pronounced "Q Star") to employees on November 22 and said a letter about it had been sent to the company's board of directors before the events that upended the company the previous weekend.
After the story was published, an OpenAI spokesperson said Ms. Murati had alerted employees to what the media was about to report, but declined to comment on the accuracy of the information.
One of the sources revealed that OpenAI has made progress on Project Q*. Some people at the company believe the project could be a breakthrough in OpenAI's quest for artificial general intelligence (AGI), the company's term for AI systems that surpass humans at most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, according to the source. Although it performs math only at an elementary-school level, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Researchers see mathematics as a frontier in the development of generative AI. Today's generative AI can write and translate between languages, but its answers to the same question can vary widely. Mastering mathematics, a field where there is only one right answer, would imply AI with stronger reasoning abilities resembling human intelligence. AI researchers believe such capabilities could be applied to novel scientific research.
Unlike a calculator, which can perform only a limited set of operations, AGI can generalize, learn, and understand problems. In their letter to the OpenAI board, the researchers flagged both the power and the potential dangers of AI, according to the sources. Computer scientists have long debated the risks posed by superintelligent machines, including whether they might decide that destroying humanity was in their interest.
Against that backdrop, Mr. Altman led the efforts that made ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment, and the computing resources, needed to move closer to superintelligence, or AGI.
In addition to announcing a slew of new tools at an event this month, Mr. Altman told world leaders in San Francisco last week that he believes AGI is within reach.
A day later, OpenAI's board fired Mr. Altman.