
OpenAI researchers warned board of AI breakthrough threat to humanity


hyperbole


First the board fired the CEO; then the CEO effectively fired the board to get reinstated. Something crazy is going on, and the US government needs to step in before it is too late.

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Anna Tong | November 23, 2023 1:52 AM PST | Updated 10 hours ago

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

'VEIL OF IGNORANCE'

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment - and computing resources - necessary to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker

Our Standards: The Thomson Reuters Trust Principles.

 
 

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 415-237-3211

 
 

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022. 

 
 

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.


  • hyperbole changed the title to OpenAI researchers warned board of AI breakthrough threat to humanity

This is a very serious matter.

The exponential rate at which these things can improve is unfathomable.

For example, Google started working on an AI that could play the notoriously hard board game Go in the early 2010s. After several years of development, their program beat the world's top Go player 4-1 in 2016.

This was considered a landmark achievement for AI, and it took Google years of work to get there. Next, they built an AI that could play against AlphaGo, and in one day it trained itself so well that it beat AlphaGo 100-0. All they did was have the two AIs play against each other, and they could play thousands of games an hour.

AlphaGo needed years of development to beat the best player in the world 4-1. The next AI trained against AlphaGo for one day and beat it 100-0.

The rate of improvement is almost a step function. It's insane.

Hence, if AI can teach itself math and actually understands what it's learning, the jump from grade-school math to super-genius math can be measured in CPU cycles. Give it more processing power and it will get there quicker.

If it becomes a super genius at math, that's when things get scary for humanity.
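The speed-up described above comes down to throughput: self-play generates orders of magnitude more training games than any human career could. A rough back-of-the-envelope sketch, using assumed rates (thousands of games an hour for the AI, a generous estimate for a human professional):

```python
# Illustrative arithmetic only; the rates below are assumptions, not measured figures.
games_per_hour = 1000                  # assumed self-play throughput
hours_per_day = 24
self_play_games_per_day = games_per_hour * hours_per_day   # 24,000 games in one day

# Assume a dedicated human pro plays ~10 serious games a day for 30 years.
human_games_lifetime = 10 * 365 * 30                       # 109,500 games

days_to_match_human = human_games_lifetime / self_play_games_per_day
print(self_play_games_per_day)   # 24000
print(human_games_lifetime)      # 109500
print(days_to_match_human)       # ~4.56: a human lifetime of practice in under 5 days
```

Under these assumptions, one day of self-play covers roughly a quarter of a human career's worth of games, which is why the improvement curve looks like a step function from the outside.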


33 minutes ago, hyperbole said:

This is a very serious matter.

The exponential rate at which these things can improve is unfathomable.

For example, Google started working on an AI that could play the notoriously hard board game Go in the early 2010s. After several years of development, their program beat the world's top Go player 4-1 in 2016.

This was considered a landmark achievement for AI, and it took Google years of work to get there. Next, they built an AI that could play against AlphaGo, and in one day it trained itself so well that it beat AlphaGo 100-0. All they did was have the two AIs play against each other, and they could play thousands of games an hour.

AlphaGo needed years of development to beat the best player in the world 4-1. The next AI trained against AlphaGo for one day and beat it 100-0.

The rate of improvement is almost a step function. It's insane.

Hence, if AI can teach itself math and actually understands what it's learning, the jump from grade-school math to super-genius math can be measured in CPU cycles. Give it more processing power and it will get there quicker.

If it becomes a super genius at math, that's when things get scary for humanity.

No big deal - let's just power off the server.

