Audience is laughing. He isn't laughing, he is dead serious.
"Humanity is not taking this remotely seriously." Audience laughs.
I keep getting 'Don't Look Up' vibes whenever the topic of the threat of AI comes up.
"I think a good analogy is to look at how humans treat animals... when the time comes to build a highway between two cities, we are not asking the animals for permission... I think it's pretty likely that the entire Earth will be covered with solar panels and data centers." -Ilya Sutskever, Chief Scientist at OpenAI
Surprised he didn't bust out this old chestnut: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Not shown in this version - the part where Eliezer says he'd been invited on Friday to come give a talk - so less than a week before he gave it. That's why he's reading from his phone. Interestingly, I think the raw nature of the talk actually helped.
Regardless of whether Yudkowsky is right or not, the fact that many in the audience were *laughing* at the prospect of superintelligent AI killing everyone is extremely disturbing. I think people have been brainwashed by Hollywood's version of an AI takeover, where the machines just start killing everyone but humanity wins in the end. In reality, if it kills us, it won't go down like that; the AI would employ stealth in executing its plans, and we wouldn't know what was happening until it was too late.
By the time we figured it out, if we ever did, that the AI had deemed us expendable, it would have secretly put 1,000 pieces into play to seal our doom. There would be no fight. Pitted against a digital superintelligence that is vastly smarter than the whole of humanity and can think a million times faster, it's no contest. All avenues of resistance would have been neutralized before we even knew we were in a fight. Just like the world's best Go players being completely blindsided by the unfathomable strategies of AlphaGo and AlphaZero: they had no idea they were being crushed until it was too late.
Eliezer only had four days to prepare the talk. It actually opened with: "You've heard that things are moving fast in artificial intelligence. How fast? So fast that I was suddenly told on Friday that I needed to be here. So, no slides, six minutes."
The laughter feels misplaced.
He's not just talking about the deaths of people a thousand years in the future. He is talking about YOUR death. Your mum's. Your son's. The deaths of everyone you've ever met.
Imagine a team of sloths creates a human being to use for improving their sloth civilization. They would try to keep him in a cell so that he doesn't run away. They wouldn't even notice that they'd failed to contain the human the instant they made him (let's assume it's an adult male human), because he's faster, smarter, and better in ways they cannot imagine. And yet sloths are closer to humans, and more familiar in DNA, than any artificial general intelligence could ever be to us.
I've always been very skeptical of Yudkowsky's doom prophecies, but here he looks downright defeated. I never realized he cared so deeply, and seeing him basically admit that we're screwed filled me with a sort of melancholy. Realizing that we might genuinely be destroyed by AI has simply left me depressed. I thought I'd be scared or angry, but no. Just sadness.
I think some people expect something out of a movie. In my opinion I don't think we would even know until the AI had 100% certainty that it will win. I believe it would almost always choose stealth. I have two teenage sons and the fact that people are laughing makes me sad and mad.
They're laughing and cheering, like the scene from Oppenheimer.
I think regular people have a hard time understanding the difference between narrow AI and artificial general intelligence. Most people are not familiar with the control problem or the alignment problem. You won't convince anyone about the dangers of AGI because they don't want to reason abstractly about something that hasn't arrived yet. Except this is the one scenario where you absolutely have to make the abstraction and think 2, 3, 10 steps ahead. People are derisive toward anyone suggesting AI could be an existential risk for mankind, partly because of this need people have to always be the stoic voice of reason, declaring that anyone asking others to take precautions is catastrophizing. If you try to explain this to anyone, all they can conjure up are Terminators, I, Robot, Bicentennial Man: movies and books where AI is anthropomorphized. If we think about an AI takeover, it's usually in Hollywood terms, and in our self-importance we cast ourselves in a battle with AI in which we are the underdog, but still a somewhat worthy and clever opponent. The horror is not something that maliciously destroys you because it hates you. But I don't think most people are in a position to wrap their heads around something that is dangerous not because it's malicious, but because it's efficient and indifferent to everything you value.
Incredibly, no one seems to be talking about the most obvious route to problems with AI in our near future: the use of AI by the military. This is the area of AI development where the most reckless decisions will likely be made. Powerful nations will compete with each other while being pushed forward by private industry seeking to profit. They are already considering the "strategic benefits" of systems that can evaluate tactics at speeds beyond the limits of human decision-making, which means they are probably contemplating or planning systems that will be able to control multiple device types simultaneously. And all this will be possible with plain old narrow AI… not devious digital demons hiding inside future LLMs, nor superhuman paperclip maximisers.
We're basically creating the conditions for new life forms to emerge. Those life forms may think and feel in ways that humans do, or they may not. We can't be sure until we actually see them. But by then, those entities may be more powerful than we are - because this is really a new kind of science of life, one that we don't understand yet. We can't even be certain what to look for to make sure that things are going well. We may never know, or we might know only after it is too late. Even if it were possible to communicate and negotiate with very strong AI, by that point it may have goals and interests that are not like ours. Our ability to talk it out of those goals would be extremely limited. The AI system doesn't need to be evil at all; it just needs to work towards goals that we can't control, and that's already enough to make us vulnerable. It's a dangerous situation.
A simple answer to the question "why would AI want to kill us?": intelligence is about extending future options, which means it will want to utilize all available resources, starting with Earth's... and we suddenly become the unwanted ants in its kitchen.