As someone who worked as a software engineer in Silicon Valley for many years (now retired), I can tell you precisely why many of the AI company leaders and staff don't take people like Eliezer seriously. Venture capital selects for extreme optimists. Just think about it. Do investors give money to people who say "AI is hard, dangerous, and we need to spend much of our effort on safety," OR do they give money to people who say "We can create AGI quickly. Safety is easy. We don't need to spend much effort on it. We need to accelerate as fast as possible on capabilities and ship products! We can make trillions if we are first to market!"? Realists and pessimists don't get venture capital. Extreme optimists do. With that in mind, the many decades I've spent observing humans on this earth have taught me an important lesson about optimism and pessimism: it is one of THE most consequential filters and biases that we have. EVERYTHING gets filtered through the optimism/pessimism lens. Confirmation bias is an extreme problem for all of us.
I'm not happy about the imminent apocalypse but I am happy to see that Eliezer has procured a more fashionable hat. Looks good.
Great interview. You can tell Eliezer has done a lot of thinking since his last public interview, and the concepts are put together in an easier-to-digest format. Thanks, Robinson
Robinson gets 1" closer to his guest every interview.
You can preorder Yudkowsky's new book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All". I do think that getting the book onto the bestseller list would help get people talking about this threat, and preorders help with that.
I have listened to many interviews with Yudkowsky, but this is the first one where I feel his arguments are clear and self-evident. He is not pushing his own scenarios; he invites us to think for ourselves, and it helps to go through the mental exercise.
When people ask "why would AI not being aligned mean humans would be dead? Why would it want to kill us?" they assume human existence and prosperity is the default state of the universe. But actually, the conditions that allow humans to exist and thrive are incredibly narrow and specific. A superintelligent AI would only need to optimize for literally anything other than this incredibly fragile set of conditions we depend on for it to spell catastrophe.
Respect for actually trying your best to role-play as an AI - Lex Fridman was incapable of that when interviewing Eliezer. On the same topic, I recommend inviting Daniel Kokotajlo and interviewing him about his "AI 2027" scenario.
Eliezer has upped his analogy game and I'm loving it!
One walk through a shopping mall, airport, or large gathering of people is enough to lose any hope AI will spare us… we haven't evolved much past our knuckle-dragging ancestors, and any superior intelligence would conclude we're not worthy of saving.
This really sucks, I just bought a belt with a 100 year guarantee, and for what?!
Best Yudkowsky interview yet, especially at the end there.
This needs 8 Billion views
What humanity is scared of is how we treat animals...
Thank you Robinson for bringing us EY's best interview so far, and for being such a good interviewer.
There are currently too many people heavily invested in selling "shovels and pans" for the "AI gold rush" to admit that AI might represent a long-term danger; acknowledging it would mean missing out on the short-term profit.
To the average person he sounds crazy. But people really need to realize that he's not talking about ChatGPT killing us. He's looking ahead at what will exist in 5-10 years.
More people should be talking about this and be willing to do it publicly and take it seriously. Thanks for having the courage to be one of those people, Robinson. I think expanding the Overton window on this is good, and this conversation likely helps with that.
It's both astonishing and sad that, judging by the comments, most people are either incapable of or unwilling to engage with Eliezer's arguments with enough intellectual honesty and integrity to at least critique them with a decent counterargument. It's sad that most people resort to ridicule or ad hominem attacks instead of engaging with the substance of the argument. This behavior is like a child's magical thinking, in my opinion: that if you ignore a problem for long enough, it will go away.