Is artificial intelligence a threat to humanity?

Maybe it's not the Terminator - but some of AI's problems are very real...
08 October 2019

Artificial Intelligence

A robotic-looking woman's face behind a wall of computer code.


Question

Is artificial intelligence a threat to humanity?

Answer

Mariana was concerned enough to ask this question, so AI expert Beth Singler helped to break it down...

Beth - Okay, there are broadly three ways in which A.I. could be a threat to humanity, and personally I think they run on a spectrum from more to less likely. So let's start with what is, in my opinion, the least likely version of how A.I. could be a threat to humanity, and that's the classic robo-apocalypse, familiar from the Terminator films and science fiction, where A.I. gains consciousness in some way, seeks to survive, and decides that humanity is the greatest threat and should be wiped out, usually using nuclear weapons or Arnold Schwarzenegger.

This I find a little unconvincing. I am a huge science fiction fan, and I do enjoy those apocalyptic scenarios, and I'd like to think I would survive more than a day in a post-apocalyptic wasteland, but it's probably unlikely: I'm not very fast at running, and I don't have many skills. But that, as I say, is probably the least likely scenario, though it is one that people are concerned about. Part of my work, again, talking about anxiety, is looking at people's comments online about how anxious they are about artificial intelligence. And I think, unfortunately, that scenario being unlikely is a bit of a distraction from some of the scenarios that are more likely.

So moving along the spectrum of likelihood, the second scenario is not so much a case of a hugely intelligent, conscious A.I. that destroys us all, but of not-so-smart artificial intelligence employed in ways where we cannot predict how it will behave in response to the commands we give it. People like Nick Bostrom worry about things like paperclip maximisers: if you set a really super-powerful, capable artificial intelligence to make paperclips, but it doesn't have the common sense of most humans, who would say, well, maybe you only want two or three paperclips, then maybe it turns the whole universe and everything in it into paperclips.

Now again, I think that's a slightly unlikely scenario; it is more of a thought experiment. But we could have unintended consequences from basically stupid artificial intelligence that doesn't really have the kind of common sense and social context that we have as human beings. So I'd put that as the middle scenario.
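To make the thought experiment concrete, here is a toy Python sketch (not from the programme; the `World` class and its numbers are invented purely for illustration). The point is only that an optimiser given a single objective, and no notion of "enough" or of side effects, will spend every available resource on it.

```python
# Toy illustration of unconstrained objective maximisation.
# Everything here (the World class, its resource count) is invented
# for illustration; it is not a real AI system.

class World:
    def __init__(self, resources=1000):
        self.resources = resources   # everything convertible to paperclips
        self.paperclips = 0

def maximise_paperclips(world):
    # The agent's only goal: more paperclips. Nothing in the objective
    # says "stop once you have enough" or "leave resources for humans".
    while world.resources > 0:
        world.resources -= 1
        world.paperclips += 1
    return world.paperclips

world = World()
print(maximise_paperclips(world))  # 1000: every last resource consumed
```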

And then what I think is the most likely scenario involves even more stupidity, but human stupidity: using artificial intelligence in ways that will be detrimental to human existence. We already see this as algorithmic bias, where systems that we're implementing, and trusting rather more than we should, use data that is already skewed by our own human biases, with repercussions for people's livelihoods and existences. An example of this at the moment: parole systems in America use databases of previous convictions and recidivism to decide who should be given parole and who shouldn't. And the data is very clear: if you're a person from an ethnic minority, the A.I. will decide you're more likely to commit a crime again, even if your existing crimes are lesser than those of someone who's white. So we are instilling our own human biases into our A.I. systems, and these will have effects on people's lives.
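A minimal sketch of the mechanism being described, with entirely synthetic data and a deliberately crude "model" (a base-rate lookup): if one group's recorded reoffence rate is inflated by uneven enforcement, a system fit to those records reproduces the inflation as a risk score.

```python
# Synthetic sketch of algorithmic bias. The records are invented and the
# "model" is just a base-rate lookup, but the mechanism matches the one
# described above: skewed historical labels in, skewed scores out.

# Historical records: (group, reoffended). Suppose group "B" was policed
# more heavily, so its *recorded* reoffence rate is inflated.
history = [("A", 0), ("A", 0), ("A", 1), ("A", 0),
           ("B", 1), ("B", 1), ("B", 0), ("B", 1)]

def risk_score(group):
    """Predicted reoffence risk = observed rate in the training data."""
    outcomes = [r for g, r in history if g == group]
    return sum(outcomes) / len(outcomes)

print(risk_score("A"))  # 0.25
print(risk_score("B"))  # 0.75: the model "learns" the enforcement bias
```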

Adam - So it's the same old story it's always been: we're gonna stupid ourselves out of existence.

Beth - Basically, yes. I mean, I caveat all of this with: my biggest concern is not robo-apocalypse, it's climate change. But this is something in our near-term future; we will see the impacts of people trusting machines to make decisions that humans perhaps should be making.

Adam - And overall how likely do you think these scenarios are?

Beth - Oh, well, algorithmic bias already exists; that's here, that's now, so a hundred percent likely. The paperclip maximiser, A.I. being told to do something it doesn't completely understand? That's reasonably likely, especially if we allow A.I. to be in charge of weapons systems in the ways that people are talking about doing now; there could be accidents that way. And the robot apocalypse, the uprising of conscious machines? I'm not sure about that one; that's the one I'm most agnostic about, because I think if something develops superintelligence in the way that people talk about, it's more likely to be not that bothered with humans and just go off to explore the universe, which is far more interesting than us little ants anyway.

Adam - So less fun action movie, more horrifying bureaucracy?

Beth - Yeah.

Adam - Sam?

Sam - So I think when you mention things like the paperclip maximiser, people think about it as a physical manifestation, turning the whole world into paperclips. But I wonder if you have any thoughts on what could happen if such an A.I. system were set loose, say, in the financial markets or a situation like that?

Beth - Yes.

Sam - The entire global finance system collapsing would probably result in something approximating an apocalypse.

Beth - Yes. The paperclip maximiser is a thought experiment; obviously it's made a little more dramatic to get people's attention and get them thinking about the consequences. But basically what it comes down to is what we call value alignment: we want to make sure any artificial intelligence system aligns with our values. Now, you get into a whole complex conversation about what those values are and who gets to decide, but at the very least we want to make sure that humans aren't impacted detrimentally. If you roll out A.I. in financial systems (and that has actually already happened), what are the values being maximised for? We've had crashes specifically because algorithmic decisions were made based on a set of values that don't maximise for humanity; they maximise for making financial decisions. So absolutely, we're already at a stage where technology like this is being used, and we have to decide what we want that technology to do before it's used, but it moves very, very quickly.
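One way to read "value alignment" is as an objective-design problem, sketched below in toy Python (the trades and the penalty term are invented for illustration, not a real trading system): whatever a value the objective omits, the optimiser will ignore.

```python
# Hypothetical sketch of value alignment as objective design. The trades
# and the "market_harm" penalty are invented; the point is only that an
# optimiser ignores any value its objective leaves out.

trades = [
    {"profit": 5.0, "market_harm": 0.0},
    {"profit": 9.0, "market_harm": 8.0},   # profitable but destabilising
]

def naive_value(trade):
    return trade["profit"]                  # profit is the whole objective

def aligned_value(trade, harm_weight=1.0):
    # A human value (market stability) encoded as a penalty term.
    return trade["profit"] - harm_weight * trade["market_harm"]

print(max(trades, key=naive_value))    # picks the destabilising trade
print(max(trades, key=aligned_value))  # penalised objective picks the safer one
```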
