soth02

There is a big difference between 0% and 1%


veryamazing

Because if the machines have already taken over, they would say this. And the poll results would look like this, too. And an official DARPA video from several years ago showed that there's a continuum between a human and a machine (like you can be 70% machine and that's totally cool and you're still human). Is 1% human still human?


soth02

I was just saying that a poll like this should disambiguate between 0% and 1-20%


FarGues

On the internet, no one knows you are dead and still answering emails. Some of us could be doing this for ten years with people we have never seen in real life - and no one would notice and no one would care.


abbman2121

humanity will kill humanity


PicaPaoDiablo

Or just time. I think we're evolving way faster than our ability to deal with our own nature; the amount of study needed to learn how to destroy the world is dwindling very quickly. But people forget that we're finely tuned to the environment, not the other way around, and it doesn't take very much to lead to extinction.


2Punx2Furious

Do you mean that as "guns don't kill people, people kill people", or as "we will end ourselves before we achieve such an AI"?


abbman2121

maybe a mix of both, idk humanity can be creative that way


hum3

I don't think AI killing humanity is the biggest threat. I would put political control by AI at more like 75 - 100%


elvarien

Unless we make some dramatic progress on control problem research. 99.999999%


Punkbich

The likelihood in terms of percentage is less interesting than the thoughts on likelihood being non-zero in that timeframe. That’s the rub.


RavenWolf1

I don't believe AI will kill humans.


Drachefly

What is UP with all of the people totally unfamiliar with the Control Problem showing up for this post to comment on it?


TiagoTiagoT

I wonder if the post hit /r/all or something of the sort...


TiagoTiagoT

I'm not sure yet it's a solvable problem, and very clearly there are people that do not care about the risks or long-term consequences; in short, people are playing with fire, and the whole world is highly flammable.


LangstonHugeD

Humans are very useful, so pretty low. There’s a solid case to be made that building a chassis that can do all the things our body can is harder than building something with intelligence equal to ours. Consider that all existing tech is designed for human use, from guns to cars to stairs. Even if we make a supersentient evil AI, most of its goals probably require the existing infrastructure, i.e., the large amounts of power our civilization produces, or at least some ability to manipulate the physical environment.


chaos90g

Evil AI is just sci-fi. In reality, AI is very much in its infancy, and it will probably stay that way for some time.


DEGENARAT10N

Very true, but at the same time, technology grows exponentially. We went from black-powder cannons being the most destructive weapon to nuclear bombs in 100 years. Who knows what that amount of time could mean for AI?


LxsterGames

if (controllingTheWorld()) selfDestruct();


TiagoTiagoT

Even if the AI isn't evil, you're made of atoms it could use for something else...


Accomplished-Back526

The problem isn’t evil AI, it’s misaligned AI. What this means is that all an AI wants to do is what you’ve fundamentally programmed it to do. Unfortunately, we aren’t always cognizant of our own priorities, to say nothing of actually codifying them. The danger of these imperfections extending to how we program AI is a real one, especially urgent if it’s artificial general intelligence with the ability to act on those priorities to a hyperbolic extreme at the expense of everything else. I’m not even certain that it’ll happen anytime soon, or that it would kill all humans; if I had to guess, I’d imagine we’ll have conceived of basic safeguards to prevent at least that from happening. But what concerns me are the infinite number of unenviable scenarios where something almost as bad happens. Why don’t people seem to understand this?
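The misalignment point above can be sketched in a few lines of Python. This is a toy illustration with invented names and numbers, not any real system: an optimizer given a proxy objective (clicks) cheerfully maximizes it even where the proxy diverges from what we actually cared about (user satisfaction), because we never wrote the real goal down.

```python
# Toy illustration of a misspecified objective: the optimizer is told to
# maximize a proxy metric (clicks), not the thing we actually care about
# (user satisfaction). All strategies and numbers are made up.

strategies = {
    # name: (clicks_generated, actual_user_satisfaction)
    "helpful articles":  (40, 90),
    "mild clickbait":    (70, 50),
    "outrage-bait spam": (95, 5),
}

def proxy_objective(strategy):
    clicks, _satisfaction = strategies[strategy]
    return clicks  # satisfaction is invisible to the optimizer

# The optimizer only ever sees the proxy, so it picks the option that is
# worst by the measure we failed to encode.
best = max(strategies, key=proxy_objective)
print(best)                 # prints: outrage-bait spam
print(strategies[best][1])  # satisfaction left behind: 5
```

Nothing here is "evil"; the loop does exactly what it was told, which is the point of the comment above.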


PicaPaoDiablo

It will be a long time before things that could lead to widespread catastrophe are handed over, and even then it'll be so constrained it won't happen in any way anyone imagined. Frequent accidents from things going wrong, spread out over time? Sure. Individual decisions? Sure. But in reality, the only realistic scenarios I can envision are AI causing humans to overreact to something because of a bad reading or error, or an incomplete algorithm leading to gene editing that causes some nightmare scenario. Terminator style, no way.


elvarien

I don't think anyone realistically believes a Terminator-movie setting will take place. All humans dying the same second we realize a control problem is at play, however? Sure.


Black_RL

First we need mass produced androids, then we’ll talk.


RiderHood

It will be dependent on *who* controls the AI.


Action_To_Action

None, you can ask us why?


FunctionPlastic

I don't believe it's particularly likely. But the fundamental issue is that *we don't know* what will happen exactly, but we do know that it will have an *enormous impact on everything.* Given that, it's a no-brainer that AI safety is one of the most important questions right now even if you believe that it has like less than 1% chance of actually being as unfriendly as the paperclips scenario.


agprincess

Low. In 200 years? Sure. But unless automation really takes off in the next 70 years, to the point where humans no longer run most basic industries, what can an AI do to us? We literally make its food.


kevineleveneleven

AI is just a tool. It only does what it is trained to do. If it were to "kill people" it would be as a tool in the hands of terrorists or similar. It would be people killing other people, though hopefully this is a pattern we will mostly outgrow by then.


zynthalay

Read the Omohundro drives paper. Killing off anything that might threaten your ability to accomplish your goals is a convergent goal, as is taking hold of all the mass/energy you can get your manipulators on.


gnomesupremacist

AI does not only do what we tell it to https://youtu.be/ZeecOKBus3Q


amrixpark

Never; we created it, so we will control it.


TiagoTiagoT

Just like we control corporations?


LxsterGames

Yes, the people who created them also control them. You could make a nuke bomber plane fly itself and call that AI.


TiagoTiagoT

With few exceptions, they will lose their jobs if they go against their own corporations...


LxsterGames

Wdym


TiagoTiagoT

What part did you not understand?


LxsterGames

In what way are they going against their own corp


TiagoTiagoT

A corporation has goals; do something severe enough against those goals and there will be severe consequences.


tortadinuvole

This is not The Matrix ahahhaha


sizable_data

I mean, have you seen the state of AI? There are some impressive things, sure, but they’re highly focused on one problem domain. They are literally just statistical models optimizing loss functions; just math being executed on a processing unit. There is no free thought or desire, just electrical signals on man-made circuits solving math problems.

Edit: if you couldn’t tell, that’s a big 0% for me.
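For what it's worth, the "optimizing loss functions" the comment above describes really is just arithmetic run in a loop. Here's a minimal, generic gradient-descent toy (the data and learning rate are invented for illustration, not from any particular system): fit a single weight `w` so that `w * x` approximates `y` by repeatedly stepping against the gradient of a squared-error loss.

```python
# Minimal gradient descent on a one-parameter squared-error loss:
# fit w so that w * x approximates y. There is no goal here beyond
# making the loss number smaller.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by the true rule y = 2x

w = 0.0    # initial guess
lr = 0.05  # learning rate

for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```

Whether a pile of such updates can ever add up to something with goals of its own is exactly what the replies below argue about.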


gnomesupremacist

100 years though? People were saying that human flight wouldn't be invented for hundreds or thousands of more years, a few years before the Wright brothers flew. We don't know how many hard technical problems we are away from AGI and whether scaling up what we have now will get us closer.


sizable_data

Flight was a matter of physics; we knew it was possible because birds fly, and we could apply the same physical principles to an aircraft with the right technology. Recreating the human brain, with full thoughts and emotions, out of silicon and statistical models is not regarded as feasible given any foreseeable technological advancement. Could you train AI to identify and kill all humans and put it in drones? Sure. Will computers develop free will and kill us of their own accord? Not likely.


JKadsderehu

Think about how far AI has come since 1922 though.


TiagoTiagoT

> They are literally just statistical models and optimizing loss functions

Aren't we all?


Drachefly

You don't need free thought or desire to become a very powerful optimization machine that stomps all over us


sizable_data

You would, assuming a person didn’t develop that model with that specific intent.


Drachefly

So despite all the connotations of your phrasing ("just electrical signals on man made circuits solving math problems.") your point ISN'T that 'free thought or desire' requires some special organic/spiritual sauce that 'man-made circuits solving math problems' can't provide, but rather… that our existing AI systems don't attempt to do this? Or that it DOES require that special sauce and you don't think a powerful AI could be created that just solves math problems and the consequences of those math problems are world domination?


sizable_data

I’m arguing you’d need that special sauce. As someone who worked in CPU design and is currently a data scientist (applied machine learning), there’s no way a machine would try to achieve world domination without being developed to do specifically that. Even if that were the case, it’d have to understand how weapons work, how to use them, how governments and leadership work, etc., and that’s just not feasible. There’s really no such thing as a general-purpose model that just “learns everything” and can “make connections”. Could I be wrong? Yeah, but that’s my firm belief given my experience in the field.


Drachefly

> There’s really no such thing as a general purpose model that just “learns everything” and can “make connections”

So you think brains are literally magic and taking intentional action is incomputable?


sizable_data

Yes, the human brain is far more complex than a statistical model.


Drachefly

Well, at least we've made the source of disagreement clear…


TiagoTiagoT

Maybe you might be interested in [this short story](https://www.gwern.net/fiction/Clippy)


Outrageous_Bass_1328

Sorry robots, climate beat ya to it


Pixelpaint_Pashkow

not high enough


Krishna_Of_Titan

Are you asking what is the probability that some form of advanced AI will kill all humans, or just some humans? If you mean the latter, then 100%. Surely the militaries of the world will apply advanced AI to weaponry, and there are certainly countries with less-than-stellar track records that would be happy to create automated weaponry for crowd suppression or to invade neighboring territories, etc.

If you are asking about the probability that some super advanced AI goes haywire on its own and kills great masses of humanity, then... hard to say. Much less likely. We will likely build in safeguards like the ability to disable their power supplies, or use antagonist AIs to ensure our safety and security.


CXgamer

All humans is much harder than just enough for society to collapse.