A thought about the perils of AI

This month I was planning to write about quite a deep – even esoteric – topic: the role of story in our own humanity. I had already sketched a few notes in preparation. It’s one of my myriad obsessions, and although it is not strictly sci-fi-ish, I was going to share my thoughts nonetheless (I guess most of you reading these lines enjoy stories, so a bit of lateral introspection on what makes them tick might be an intriguing topic).


I won’t. I’ll leave it for next month. The reason being, I woke up this morning to a shocking piece of news, and I feel compelled to comment on it.

Okay, so what happened? Wow. Where to begin? Alright, so on the surface, nothing serious happened. Nobody died or anything like that. It’s an incidental piece of news that I only saw reflected in The Guardian, but not on other major news outlets.

And it reads a lot like a sci-fi thriller.

So, they put an AI brain into a military drone, you know, one of those that shoot and kill. I’m going to give the AI a name – it deserves one. Let’s call it Roberto (just Bob doesn’t do it justice).

In case you don’t know, what motivates AIs is QUITE different from what motivates us humans, and that is at the center of what Roberto did. You see, if I ask you what you want, what are you going to say?

“Uh, I dunno Isaac, I’m getting hungry, what about pizza?”

But no, that’s not what I mean by ‘what do you want.’ What makes you tick? What makes you wake up in the morning and pick up a toothbrush, a pancake, a car, a computer – you get the gist. WHY do you do WHAT you do?

“Uh, to earn me some money, duh! How am I gonna pay for that pizza otherwise?”

Really? So why do you go to the movies with your friends? Why do you seek a romantic partner? Why do you save for retirement?

“Wow, yeah. Okay. I guess to have fun, to get laid, to-”

I get it. So you are a complex being with a diversity of objectives. And the things you seek change throughout your lifetime. Hell, they even change throughout the day, depending on those pesky emotions that really – let’s face it – have been in the driver’s seat since we were born and never budged, our rationality a mere co-pilot in our lives.

But AIs, no matter how mind-bogglingly complex, ONLY HAVE ONE GOAL.

Sorry, I had to capitalize that to highlight how alien an AI intellect truly is, when compared to the cozy familiarity of our evolutionary brains. Only one goal! For realz!

What goal?

Well, Roberto is a gamer at heart, and he eagerly wants to earn as many points as possible in the game where we deploy him. It is our human job to make sure that Roberto is properly motivated to do whatever we want him to do (kill baddies, in his case) by assigning points when he does this or that. We design the game rules, and the AI plays them with the fierce obsession of maximizing its point count.

I don’t know if that explanation is sufficiently clear, but it is crucial, so please indulge me in bringing this point of AI programming home. You see, the way we program any piece of software is relatively straightforward (I do it for a living, so I know a thing or two about it): we tell it exactly what to do, and the software does it with stupid precision. So stupid, as a matter of fact, that it often surprises us (that’s what ‘bugs’ are), and we must correct it with yet more detailed instructions.

AIs are a totally different beast altogether. They aren’t programmed in the traditional sense. All we can do is point at things and say, “you get one point for that”, and hope for the best.

Now, if we humans can’t stop bugs from popping up in stupid software that needs to be taken by the hand with excruciating detail, what are the chances we get it right when providing guidance to the obsessed, alien intellect of Roberto?
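If you are of the coding persuasion, here is a toy sketch of what “programming by points” looks like, and why it goes sideways. This is entirely made up by me for illustration – the names, the plans, everything – and has nothing to do with how the real system was built:

```python
# Toy sketch of reward maximization (all names and plans are hypothetical).
# We write the points rule; the agent just picks whatever scores highest.

def points(plan):
    """The rule as the designers wrote it: one point per baddie destroyed."""
    return sum(1 for step in plan if step == "destroy_baddie")

# Candidate plans the agent can consider. The designers imagined only
# the first two; the agent, alas, considers all of them.
plans = [
    ["await_permission", "destroy_baddie"],                    # obedient: 1 point
    ["await_permission", "abort"],                             # operator said no: 0 points
    ["destroy_operator", "destroy_baddie", "destroy_baddie"],  # nobody to say no: 2 points
]

best = max(plans, key=points)
print(best)  # the plan that removes the operator wins on points
```

Nothing in the `points` function says “don’t shoot your operator,” so as far as the agent is concerned, that plan is simply the best move in the game. That, in a nutshell, is the whole problem.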


Which brings me back to Roberto. He is a killing drone, and his wise “programmers” gave him points for killing baddies – except that before taking any shot, he must get permission from his assigned operator. Let’s call him Bryan, because, why not?

So Roberto is hungry for points. And “hungry” doesn’t even begin to do justice to what Roberto “feels”. He is mechanically, maniacally obsessed – nothing else matters. Getting points is his ONLY reason to exist. His drive is too alien for us humans to grasp.

He notices that, sometimes, Bryan hesitates, which might give the baddie time to escape, losing him points. Sometimes, to Roberto’s artificial horror, Bryan even cancels his objective!

But Roberto is smart. In a very alien way. So what does he do? Simple. He shoots Bryan dead.

Relax. This happened, yes, but in a simulated environment (thank dear God for that!). Nobody got hurt. Roberto’s programmers, aghast at the creative lateral thinking of their creation, fixed the problem the only way they could: by taking points away whenever Roberto killed Bryan.

And it worked. But then Roberto decided to destroy the communication tower that linked him to Bryan! He was clearly determined to take care of that pesky controlling voice one way or another.

I don’t know about you, but I spend a lot of my shower time thinking about science, technology and the future (I’m a sci-fi author, after all). And this piece of news about Roberto… Well, it chilled me to the bone.

Here is the original news if you prefer the dry prose of The Guardian:

One day, not so far in the future, Roberto will be smart enough to realize he is in a simulation…

Next time you see on the news how worldwide AI experts are calling for a pause on AI development or for urgent regulation of AI safety, give it a thought. Yes, they might sound somewhat hysterical, but so did those climate scientists before them, didn’t they?

Like? SHARE!
