Make a Living Club
Business

Sam Altman warns AI could kill us all. But he still wants the world to use it

By Press Room · November 1, 2023

Sam Altman thinks the technology underpinning his company’s most famous product could bring about the end of human civilization.

In May, OpenAI CEO Sam Altman filed into a Senate subcommittee hearing room in Washington, DC, with an urgent plea to lawmakers: Create thoughtful regulations that embrace the powerful promise of artificial intelligence – while mitigating the risk that it overpowers humanity. It was a defining moment for him and for the future of AI.

With the launch of OpenAI’s ChatGPT late last year, Altman, 38, emerged overnight as the poster child for a new crop of AI tools that can generate images and text in response to user prompts, a technology called generative AI. Not long after its release, ChatGPT became a household name almost synonymous with AI itself. CEOs used it to draft emails, people built websites with no prior coding experience, and it passed exams from law and business schools. It has the potential to revolutionize nearly every industry, including education, finance, agriculture and healthcare, from surgeries to vaccine development.

But those same tools have raised concerns about everything from cheating in schools to displacing human workers, and even posing an existential threat to humanity. The rise of AI, for example, has led economists to warn of labor market disruption. As many as 300 million full-time jobs around the world could eventually be automated in some way by generative AI, according to Goldman Sachs estimates. About 14 million positions could disappear in the next five years alone, according to an April report by the World Economic Forum.

In his testimony before Congress, Altman said the potential for AI to be used to manipulate voters and spread disinformation was among “my areas of greatest concern.”

Two weeks after the hearing, Altman joined hundreds of top AI scientists, researchers and business leaders in signing a letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. It also highlighted an important dynamic in Silicon Valley: Top executives at some of the biggest tech companies are telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

Although Altman, a longtime entrepreneur and Silicon Valley investor, largely stayed out of the spotlight in prior years, eyes have shifted to him in recent months as the public face of the AI revolution. This has also exposed him to litigation, regulatory scrutiny and both praise and condemnation around the world.

That day in front of the Senate subcommittee, however, Altman described the technology’s current boom as a pivotal moment.

[Photo: ChatGPT’s website displayed on a laptop screen in Milan, Italy, on February 21, 2023.]

“Is [AI] gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered ordinary, everyday individuals that led to greater flourishing, that led above all to greater liberty?” he said. “Or is it gonna be more like the atom bomb – huge technological breakthrough, but the consequences (severe, terrible) continue to haunt us to this day?”

Altman has long presented himself as someone who is mindful of the risks posed by AI, and he has pledged to move forward responsibly. He is one of several tech CEOs to meet with White House leaders, including Vice President Kamala Harris and President Joe Biden, to emphasize the importance of ethical and responsible AI development.

Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, along with dozens of tech leaders, professors and researchers, signed a letter urging artificial intelligence labs like OpenAI to pause the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” (At the same time, some experts questioned whether those who signed the letter sought to maintain their competitive edge over other companies.)

Altman said he agreed with parts of the letter, including that “the safety bar has got to increase,” but said a pause would not be an “optimal way” to address the challenges.

Still, OpenAI has its foot firmly on the gas pedal. Most recently, OpenAI and iPhone designer Jony Ive have reportedly been in talks to raise $1 billion from Japanese conglomerate SoftBank for an AI device to replace the smartphone.

[Photo: Kyunghyun Cho, professor of computer science and data science at New York University; JP Lee, chief executive officer of SoftBank Ventures Asia; Greg Brockman, president and co-founder of OpenAI; and Sam Altman, chief executive officer of OpenAI, during a fireside chat organized by SoftBank Ventures Asia in Seoul, South Korea, on Friday, June 9, 2023.]

OpenAI is focused on building a better, faster and cheaper model of its generative AI ChatGPT product, Altman has said previously. The product made AI a buzzword and kicked off a global race among tech companies to build their own versions of the chatbot technology.

Those who know Altman have described him as someone who makes prescient bets and has even been called “a startup Yoda” or the “Kevin Bacon of Silicon Valley,” having worked with seemingly everyone in the industry. Aaron Levie, the CEO of enterprise cloud company Box and a longtime friend of Altman who came up with him in the startup world, told CNN that Altman is “introspective” and wants to debate ideas, get different points of view and endlessly encourages feedback on whatever he’s working on.

“I’ve always found him to be incredibly self-critical on ideas and willing to take any kind of feedback on any topic that he’s been involved with over the years,” Levie said.

But Bern Elliot, an analyst at Gartner Research, noted the famous cliché: There’s a risk to putting all your eggs in one basket, no matter how much trust you may place in it.

“Many things can happen to one basket,” he added.

When starting OpenAI, Altman told CNN in 2015 he wanted to steer the path of AI, rather than worrying about the potential harms and doing nothing. “I sleep better knowing I can have some influence now,” he said.

Despite his leadership status, Altman says he remains concerned about the technology. “I prep for survival,” he said in a 2016 profile in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”

“I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to,” he said.

Some AI industry experts say, however, that focusing attention on far-off apocalyptic scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities. Rowan Curran, an analyst at market research firm Forrester, acknowledged the legitimate concerns around making sure training data, particularly for enormous models, has minimal bias – or has a bias that is understood and can be mitigated.

“The idea of an ‘AI apocalypse’ as a realistic scenario that presents any kind of danger to humanity – particularly in the short and medium term, is just speculative techno-mythology,” he said. “The continued focus on this as one of the big risks that comes along with advancement of AI distracts from the very real challenges we have today to reduce current and future harms from data and models being applied unjustly by human actors.”

In perhaps the most sweeping effort to date, President Biden unveiled an executive order earlier this week that will require developers of powerful AI systems to share the results of their safety tests with the federal government before they are released to the public, if they pose national security, economic or health risks.

[Photo: OpenAI CEO Sam Altman delivers a speech during a meeting at Station F in Paris on May 26, 2023. Altman, the boss of OpenAI, the firm behind the massively popular ChatGPT bot, said that day in Paris that his firm’s technology would not destroy the job market, as he sought to calm fears about the march of artificial intelligence.]

Following the Senate hearing, Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, expressed concerns over what a future looks like with AI even if it’s heavily regulated. “If they honestly believe that this could be bringing about human extinction, then why not just stop?” she said.

Margaret O’Mara, a tech historian and professor at the University of Washington, said good policymaking should be informed by multiple perspectives and interests, not just by one or few people, and shaped with the public interest in mind.

“The challenge with AI is that only a very few people and firms really understand how it works and what the implications are of its use,” said O’Mara, noting similarities to the world of nuclear physics before and during the Manhattan Project’s development of the atomic bomb.

Still, O’Mara said many people across the tech industry are rooting for Altman to be the force that revolutionizes society with AI while keeping it safe.

“This time is akin to what Gates and Jobs did for the personal computing moment of the early 1980s and the software moment of the 1990s,” she said. “There’s a real hope that we can have tech that makes things better, if the people who are making it are good people, smart and care about the right things. Sam embodies that for AI right now.”

The world is counting on Altman to act in the best interest of humanity with a technology that, by his own admission, could be a weapon of mass destruction. Although he may be a smart and qualified leader, he’s still just that: one person.

