The intelligence explosion: Nick Bostrom on the future of AI
Big Think

Published on Apr 9, 2023

We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.

Subscribe to Big Think on YouTube ► / @bigthink
Up next, Is AI a species-level threat to humanity? With Elon Musk, Michio Kaku, Steven Pinker & more ► • Is AI a species-level threat to human...

Nick Bostrom, a professor at the University of Oxford and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that, within this century, we will create the first general intelligence smarter than humans. He sees this as the most important thing humanity will ever do, and one that comes with enormous responsibility.

Bostrom notes that the transition to the machine intelligence era carries existential risks, such as the possibility of a superintelligence that overrides human civilization with its own value structures. There is also the question of how to ensure that conscious digital minds are treated well. If we succeed, however, we could have vastly better tools for dealing with everything from disease to poverty.

Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.

0:00 Smarter than humans
0:57 Brains: From organic to artificial
1:39 The birth of superintelligence
2:58 Existential risks
4:22 The future of humanity

Read the video transcript ► https://bigthink.com/series/the-big-t...

----------------------------------------------------------------------------------

About Nick Bostrom:
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002).

Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.
----------------------------------------------------------------------------------

Read more of our stories on artificial intelligence:
Concern trolling: the fear-based tactic that derails progress, from AI to biotech
► https://bigthink.com/starts-with-a-ba...
People destroyed printing presses out of fear. What will we do to AI?
► https://bigthink.com/the-past/printin...
I signed the “pause AI” letter, but not for the reasons you think
► https://bigthink.com/13-8/pause-ai-le...

----------------------------------------------------------------------------------

Want more Big Think?
► Daily editorial features: https://bigthink.com/?utm_source=yout...
► Get the best of Big Think right to your inbox: https://bigthink.com/subscribe/?utm_s...
► Facebook: https://bigth.ink/facebook/?utm_sourc...
► Instagram: https://bigth.ink/Instagram/?utm_sour...
► Twitter: https://bigth.ink/twitter/?utm_source...
