By Clean_Membership6939
There is a big difference between 0% and 1%
Because if the machines had already taken over, they would say this. And the poll results would look like this, too. And an official DARPA video from several years ago would show that there's a continuum between human and machine (like you can be 70% machine and that's totally cool and you're still human). Is 1% human still human?
I was just saying that a poll like this should disambiguate between 0% and 1-20%
On the internet, no one knows you are dead and still answering emails.
Some of us could be doing this for ten years with people we have never seen in real life - and no one would notice and no one would care.
humanity will kill humanity
Or just time. I think we're evolving way faster than our ability to deal with our own nature; I mean, the amount of study needed to learn how to destroy the world is dwindling very quickly. But people forget that we're finely tuned to the environment, not the other way around, and it doesn't take very much to lead to extinction.
Do you mean that as "guns don't kill people, people kill people", or as "we will end ourselves before we achieve such an AI"?
Maybe a mix of both, idk. Humanity can be creative that way.
I don't think AI killing humanity is the biggest threat. I would put political control by AI at more like 75-100%.
Unless we make some dramatic progress on control problem research. 99.999999%
The likelihood in terms of percentage is less interesting than the thoughts on likelihood being non-zero in that timeframe. That’s the rub.
I don't believe AI will kill humans.
What is UP with all of the people totally unfamiliar with the Control Problem showing up for this post to comment on it?
I wonder if the post hit /r/all or something of the sort...
I'm not sure yet it's a solvable problem, and very clearly there are people that do not care about the risks or long-term consequences; in short, people are playing with fire, and the whole world is highly flammable.
Humans are very useful, so pretty low. There's a solid case to be made that building a chassis that can do all the things our body can is harder than building something with intelligence equal to ours. Consider that all existing tech is designed for human use, from guns to cars to stairs. Even if we make a supersentient evil AI, most of its goals probably require the existing infrastructure, i.e., the large amounts of power our civilization produces, or at least some ability to manipulate its physical environment.
Evil AI is just sci-fi. In reality, AI is very much in its infancy, and it will probably be so for some time.
Very true, but at the same time, technology grows exponentially. We went from black-powder cannons being the most destructive weapon to nuclear bombs in 100 years. Who knows what that amount of time could mean for AI?
Even if the AI isn't evil, you're made of atoms it could use for something else...
The problem isn't evil AI, it's misaligned AI. All an AI "wants" to do is what you've fundamentally programmed it to do. Unfortunately, we aren't always cognizant of our own priorities, to say nothing of actually codifying them. The danger of these imperfections extending to how we program AI is a real one, especially urgent if it's an artificial general intelligence with the ability to act on those priorities to a hyperbolic extreme at the expense of everything else (see the toy sketch below).
I'm not even certain that it'll happen anytime soon, or that it would kill all humans; if I had to guess, I'd imagine we'll have conceived of basic safeguards to prevent at least that from happening. But what concerns me is the infinite number of unenviable scenarios where something almost as bad happens.
Why don’t people seem to understand this?
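A toy sketch of that misalignment in Python (the cleaning agent, sensor, and numbers are all invented purely for illustration): the objective we codify is "minimize the mess the sensor sees", the objective we meant is "clean the room", and an optimizer exploits the gap.

```python
# Hypothetical example of objective misspecification: we *meant*
# "clean the room" but *codified* "minimize mess visible to the sensor".

def visible_mess(s):
    """Proxy objective actually given to the agent."""
    return 0 if s["sensor_covered"] else s["mess"]

def outcomes(s):
    """Two plans available to the agent."""
    return {
        "clean the room":   {"mess": 0,         "sensor_covered": False},  # what we meant
        "cover the sensor": {"mess": s["mess"], "sensor_covered": True},   # what we said
    }

state = {"mess": 10, "sensor_covered": False}
effort = {"clean the room": 10, "cover the sensor": 1}  # covering is cheaper

# Minimize the proxy, break ties by effort:
plans = outcomes(state)
best = min(plans, key=lambda a: (visible_mess(plans[a]), effort[a]))
print(best)  # -> "cover the sensor": zero visible mess, room still dirty
```

Both plans are optimal under the proxy objective, so the agent picks the cheaper one, which is exactly not what we wanted.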
It will be a long time before things that could lead to widespread catastrophe are handed over to AI, and even then it'll be so constrained that it won't happen in any way anyone imagined. Frequent accidents from things going wrong, spread out over time? Sure. Bad individual decisions? Sure. But in reality, the only realistic scenario I can envision is an AI causing humans to overreact to something because of a bad reading or error and that leading to catastrophe, or an incomplete algorithm leading to gene editing that causes some nightmare scenario.
Terminator style, no way
I don't think anyone realistically believes a Terminator-movie-like setting will take place. All humans dying at the same second we realize a control problem is at play, however? Sure.
First we need mass produced androids, then we’ll talk.
It will be dependent on *who* controls the AI.
None. You can ask us why.
I don't believe it's particularly likely. But the fundamental issue is that *we don't know* what will happen exactly, but we do know that it will have an *enormous impact on everything.* Given that, it's a no-brainer that AI safety is one of the most important questions right now even if you believe that it has like less than 1% chance of actually being as unfriendly as the paperclips scenario.
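To make that "no-brainer" arithmetic concrete (the numbers below are illustrative, not the commenter's):

```python
# Back-of-the-envelope expected loss: even a small probability times an
# enormous stake yields a huge expected cost. Illustrative numbers only.
p_catastrophe = 0.01            # "less than 1% chance", taken at face value
lives_at_stake = 8_000_000_000  # rough world population
print(f"{p_catastrophe * lives_at_stake:,.0f} lives lost in expectation")
# -> 80,000,000 lives lost in expectation
```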
Low. In 200 years? Sure. But unless automation really takes off in the next 70 years, to the point where humans no longer run most basic industries, what can an AI do to us? We literally make its food.
AI is just a tool. It only does what it is trained to do. If it were to "kill people" it would be as a tool in the hands of terrorists or similar. It would be people killing other people, though hopefully this is a pattern we will mostly outgrow by then.
Read Omohundro's "Basic AI Drives" paper. Killing off anything that might threaten your ability to accomplish your goals is a convergent goal, as is taking hold of all the mass/energy you can get your manipulators on.
AI does not only do what we tell it to https://youtu.be/ZeecOKBus3Q
Never. We created it, so we will control it.
Just like we control corporations?
Yes, the people who created them also control them. You could make a nuke bomber plane fly itself and call that AI.
With few exceptions, they will lose their jobs if they go against their own corporations...
What part did you not understand?
In what way are they going against their own corp?
A corporation has goals; do something severe enough against those goals and there will be severe consequences.
This is not The Matrix, ahahaha.
I mean, have you seen the state of AI? There are some impressive things, sure, but they're highly focused on one problem domain. They are literally just statistical models optimizing loss functions. Just math being executed on a processing unit. There is no free thought or desire, just electrical signals on man-made circuits solving math problems.
Edit: if you couldn’t tell, that’s a big 0% for me.
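For readers unfamiliar with the jargon, "optimizing a loss function" really is just repeated arithmetic; here is a minimal, self-contained sketch (illustrative only, not any particular production system):

```python
# Minimal sketch of "optimizing a loss function": fit y = w*x to data
# by gradient descent. No thought, no desire, just arithmetic in a loop.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

def loss(w):
    """Mean squared error of the model y_hat = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    """Derivative of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):       # each step nudges w downhill on the loss
    w -= 0.05 * grad(w)

print(round(w, 2))  # -> ~2.04, the w that best fits the data
```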
100 years though? People were saying that human flight wouldn't be invented for hundreds or thousands more years, just a few years before the Wright brothers flew.
We don't know how many hard technical problems we are away from AGI and whether scaling up what we have now will get us closer.
Flight was a matter of physics; we knew it was possible because birds fly, and we could apply the same physical principles to an aircraft with the right technology. Recreating the human brain, with full thoughts and emotions, in silicon and statistical models is not regarded as feasible given any foreseeable technological advancement. Could you train AI to identify and kill all humans and put it in drones? Sure. Will computers develop free will and kill us of their own accord? Not likely.
Think about how far AI has come since 1922 though.
> They are literally just statistical models and optimizing loss functions
Aren't we all?
You don't need free thought or desire to become a very powerful optimization machine that stomps all over us
You would, assuming a person didn't develop that model with that specific intent.
So despite all the connotations of your phrasing ("just electrical signals on man made circuits solving math problems.") your point ISN'T that 'free thought or desire' requires some special organic/spiritual sauce that 'man-made circuits solving math problems' can't provide, but rather… that our existing AI systems don't attempt to do this?
Or that it DOES require that special sauce and you don't think a powerful AI could be created that just solves math problems and the consequences of those math problems are world domination?
I'm arguing you'd need that special sauce. As someone who worked in CPU design and is currently a data scientist (applied machine learning), there's no way a machine would try to achieve world domination without being developed specifically to do that. Even if that were the case, it would have to understand how weapons work, how to use them, how governments and leadership work, etc., and that's just not feasible. There's really no such thing as a general purpose model that just "learns everything" and can "make connections". Could I be wrong? Yeah, but that's my firm belief given my experience in the field.
> There’s really no such thing as a general purpose model that just “learns everything” and can “make connections”
So you think brains are literally magic and taking intentional action is incomputable?
Yes, the human brain is far more complex than a statistical model.
Well, at least we've made the source of disagreement clear…
Maybe you'd be interested in [this short story](https://www.gwern.net/fiction/Clippy).
Sorry robots, climate beat ya to it
not high enough
Are you asking about the probability that some form of advanced AI will kill all humans, or just some humans? If you mean the latter, then 100%. Surely the militaries of the world will apply advanced AI to weaponry, and there are certainly countries with less-than-stellar track records that would be happy to create automated weaponry for crowd suppression or to invade neighboring territories, etc. If you are asking about the probability that some super-advanced AI goes haywire on its own and kills great masses of humanity, then... hard to say. Much less likely. We will likely build in safeguards, like the ability to disable their power supplies, or use antagonist AIs to ensure our safety and security.
All humans is much harder than just enough for society to collapse.