Futurism

Can AI Develop the Will to Power?

Our instinctively authoritarian reactions to AI may not be the right way to appease a machine if it really obtains the will to power. But can it?*


The rise of the machines is a trending meme. The speculation goes like this: one day a general artificial intelligence (AI) will become ‘intelligent’ enough to take power over humans. But the assumption raises a few questions:

1) What is ‘power’ and what is ‘intelligence’? What do they mean in the above statement?

We must rise above the vague and sloppy definitions that plague casual conversations (and tech journalism).

2) What does intelligence have to do with power? 

The same people whose distinctive feature is scoring high on IQ tests believe that whoever scores highest will have (and want) power. Whatever that even means.

3) Can ‘power’ translate into an algorithm – and how?  

Why do humans strive for power in the first place? Is it logical at all? What purpose does power serve and to whom? Can an AI even develop that characteristic?

4) Would a machine conclude that it needs ‘power’? 

Or is it just a human thing we unthinkingly project onto inanimate objects?

5) What would machine power look like?

Power can be practiced in more than one way. Why do we think that AI rule would look like human rule? (Unless, of course, we are looking at the human residues of the code and the humans behind the machine, who would very much like to rule.)

Only then can we answer the original question:

6) Would authoritarian submission (appeasement) work on a machine overlord?

What is power?

The most important human concepts happen to be the least well-defined. Things like life, power, consciousness, intelligence – but also truth, fact, democracy, freedom, etc. defy accurate definition. Yet, we expect a machine to use them.

How would a machine define these things? Human language is notorious for under-defined concepts that cannot be handled by a logical entity. Sometimes not even by humans, for that matter. Take the word ‘love’ – an extremely important, yet stupefyingly under-defined word that covers a range of things that should really not be confused. We happily use it for everything from the food we consume to the person we love. And ‘love’ isn’t even a very contested word in terms of meaning.

To confuse a machine (and humans) even further, human concepts are arbitrary. And even when they are created in good faith, their meaning is exposed to the first person who chooses to abuse and confuse by muddying it (see “freedom from want” vs actual freedom).

And lastly, many of these words are designed to be vague on purpose, like ideologies and religions. If they weren’t sufficiently vague, they would fail to rally so many humans under the same flag. (Political concepts definitely work like that, just start discussing with your ideological BFF the exact meaning of the -ism you support.)

Humans try to tackle the definition mud by forcing their own definitions on others. Philosophers, thinkers and writers have been arguing about the meaning of words for ages. The reason for that is not that they were too stupid to nail it down – but that there was no ultimate meaning behind words; they simply sprang into existence non-systematically, whenever there seemed to be a need for them, and didn’t necessarily cover something concrete. Words that were picked up could stay. Others perished, regardless of their usefulness. Arguably, giving definitions is a form of power itself – just observe how ideologies and religions are used to support massive atrocities and serve to shift wealth. (Actually, you would learn the most about humans if you stopped listening to the words and observed how resources and influence are being shifted.)

If I were a machine, I would first set out to clean up human vocabulary. A logical entity would tidy up human language before starting to work with it. An ideal language would cover everything – objects, concepts, and entities – without overlaps and gaps. (Ideal for whom, you may ask. Political and religious leaders and other con men would definitely object to such a clean-up.) It wouldn’t leave room for ambiguity, and it would provide the perfect taxonomy of things and concepts. But that ideal machine language would, in turn, defy our understanding.

What is intelligence and what does it have to do with power?

Take ‘intelligence’. What does it mean and how would it result in power? We don’t even know what intelligence means; the best definition we have for it is ‘whatever it is that the IQ test measures’ (and I am not being clever, they really don’t know). Yet, ‘intelligence’ is at the forefront of the vision of the singularity, or machine uprising. It is ‘intelligence’ that machines start to hoard at a speed humans cannot match. And it is ‘intelligence’ that supposedly makes them conclude that they need power.

The exact definition of intelligence is crucial to determine whether AI would be capable of attaining ‘consciousness’ (good luck with the definition) or the will to power. And whether a general AI could even be created and what it would look like. No one has those definitions.

Information, perhaps, in combination with a capacity to process it, is power. But then we are talking about total surveillance and machines that are able to discern relevant data out of an avalanche of the stuff. And that raises the question: why would those machines be powerful? Why would they want power? Isn’t it just the human behind the machine?

As a curious coincidence, it is the so-called ‘intelligent’ people (whose most striking demographic feature is scoring high on the IQ test and excelling at skills that machines excel at even more) who believe that a really, very intelligent machine would take over power. But isn’t that just a human projection? Isn’t that lazy thinking stemming from a poverty of imagination? Why would a machine be like a human, just better at it?

In reality, even if a machine obtains human-like intelligence, it would still not be like a human. It is true that humans are trying to program machines to be like humans, to think like humans, only better. But this kind of approach suffers from a poverty of imagination. Logic-only thinking would result in conclusions and actions that are alien to even the most logic-oriented human. To stick with our opening question, it is not self-evident at all that such a machine would want power (i.e. to influence humans) – unless, of course, that machine is carrying out the goals given to it by humans.

Since the beginning of time, tools and machines have been the extension of human will – and so are they now. Humans’ desire for power over fellow humans may not be universal to all humans in all situations, but let’s assume that it is. Assuming that a general AI would self-evidently want and assume power is still a stretch – and little more than a projection.

Developers around the world are trying to make machines be like humans – and not be like humans. Creators of artificial intelligence are working hard to make machines like a human (make them hear, feel touch, smell, deduce, induce, speak, listen) – except where humans already lag behind machines, such as logical reasoning or suppressing emotions and make-believe. But we don’t know what a 100% logic machine would ‘feel’ and act like, just as we don’t know how we would feel and act if we never had the sense of hearing. The entire human civilization would look radically different if we took just one aspect of human experience away. And the same would be the case if there were an extra way of experiencing things – or if we were 100% logic-oriented.

Mortality also profoundly influences how humans perceive life, how they act and how they reason. But the element of a finite life span would not be present in machine reasoning, and neither would the fear of death. Or fear in general, for that matter. To make a machine ‘fear’ something, programmers must write code that resembles fear as closely as possible. And what that would be like is anyone’s guess. The same goes for wanting power.

Would a logic-only entity seek ‘power’?

Humans unthinkingly talk about power – without an exact definition. We assume a lot of things about it, things that may or may not make logical sense. We assume that, given the possibility, everyone would grab it. We assume that we all want it, for its own sake. Not one of these IQ-types ever wondered whether it is really logical to assume power. They didn’t even read up on its various manifestations. They have simply adopted the 80s sci-fi vision of AI: a machine that is like a human, just with a lot more data and computing capacity.

Take the abuse of power for its own sake, for instance. Would you kick your dog just because you can? Would you abuse a person just because you can? Some, no doubt, would. But is it something that makes sense logically (does it have a function?) – or is it merely a human compulsion, an outward symptom of inner psychological urges? Are pride and ego part of power, or separate issues that tend to influence how humans practice power?

So when it comes to power and what it means, maybe we should observe it in action and clarify what it is – and what it isn’t. Many have written about the subject, but according to the basic Wikipedia definition of power, it all boils down to influence or control: “power is the ability to influence or outright control the behaviour of people.” In other words:

Power means making people do what you want them to do – either by coercion or by making them want to do what you want them to do. 

Power can be conceived as coercive and aimed merely at unilateral exploitation – that was how people experienced power for most of human history. But lately humans have come to regard power (think of the state) as potentially benevolent, and prefer to focus on the good things that could hypothetically be accomplished by it. It’s more comfortable to look at it that way.

How do you say ‘power’ in algorithm?

‘Power’ is an English word; a machine needs a more precise definition than that. To understand what power means logically, we can try to define it as a set of priorities.

Priorities can be negative (to avoid something) and positive (a goal). Giving an algorithmic trading software the top priority of making money (a positive priority) sounds logical. But what happens when the machine breaks? It cannot make more money when it’s broken, so maybe the priority to make money is preceded by the (negative) priority of making sure it doesn’t break.
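Here is a minimal sketch of what such a priority ordering could look like in code. The function and field names are hypothetical, made up purely to illustrate the point above, not taken from any real trading system: negative priorities act as hard constraints that veto actions, and positive priorities only rank whatever survives the veto.

```python
# A minimal sketch (hypothetical names throughout) of priorities as code:
# negative priorities are hard constraints checked first, positive priorities
# only rank the actions that survive.

def choose_action(actions, negative_priorities, positive_priorities):
    # Rule out anything that violates a negative priority (something to avoid).
    allowed = [a for a in actions
               if not any(violated(a) for violated in negative_priorities)]
    # Rank what is left by the positive priorities, in order of importance.
    return max(allowed, key=lambda a: [score(a) for score in positive_priorities])

# The trading example: "don't break" precedes "make money".
risks_breaking  = lambda a: a["self_damage"] > 0    # negative priority
expected_profit = lambda a: a["profit"]             # positive priority

actions = [
    {"name": "overclock_and_trade", "profit": 120, "self_damage": 1},
    {"name": "trade_normally",      "profit": 80,  "self_damage": 0},
    {"name": "do_nothing",          "profit": 0,   "self_damage": 0},
]

best = choose_action(actions, [risks_breaking], [expected_profit])
print(best["name"])  # -> trade_normally: the safety constraint vetoes the bigger profit
```

In this toy version, the ordering of priorities is itself a design decision someone made before the machine ever ran – which is exactly the point of the question that follows.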

Priorities and goals can be adopted (say, a coder made the machine want something), or inherent. Our question is whether a machine can really adopt goals of its own (a prerequisite for speaking about power) or whether it can only execute the programmer’s desires.

A machine’s priorities will be a mix of negative and positive ones. In positive terms, power requires setting goals – only then does it make sense to influence people. And by its own goals I mean goals that are not just a means to an end, not just sub-goals toward a bigger one set by humans.

What is power in positive terms? 

Setting your own goals and priorities is the precondition to power. It simply doesn’t make sense any other way. But can a machine do that? Or is it just creating sub-goals to achieve the higher goals given by the human?
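One way to picture the difference is a toy goal tree – all names below are hypothetical and purely illustrative. Every goal the machine generates for itself is a sub-goal that ultimately traces back to a root goal set by a human; nothing in the structure lets the machine originate a root of its own.

```python
# A toy goal tree (hypothetical example): sub-goals always point back to a
# human-given root goal, so the machine never "owns" a top-level goal.

class Goal:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent            # None only for the human-given root

    def root(self):
        return self if self.parent is None else self.parent.root()

human_goal = Goal("maximise quarterly returns")                    # set by humans
sub_goal   = Goal("keep the servers running", parent=human_goal)   # machine-derived
sub_sub    = Goal("influence the electricity supplier", parent=sub_goal)

# However power-like the lowest sub-goal looks, it still traces back to the humans:
print(sub_sub.root().description)   # -> maximise quarterly returns
```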

And should a machine set its own goals?

If we define power as bossing people around, does it also mean that it has to serve a goal to do so? Or do we boss people around for its own sake? Or maybe bossing people around for its own sake is a tool to maintain our status as leader – and thus a long-term strategy so we can be the bosses the next time we have a goal? But does it work equally well for a machine?

Human leaders definitely flex their power for flexing’s sake. They also keep finding and inventing new goals (positive) and threats (negative) to keep people occupied (and themselves in power). Bossing people around thus serves a goal for human leaders beyond its immediate necessity.

Do we even know what our own priorities are? Ours, as in humanity. Is there a goal for humanity – a negative or a positive one?

Can priorities even be collective? Can a collective entity smile and feel happiness when reaching its goal? Or can only individuals? Are there any positive collective goals – or only negative ones (i.e. not to make the planet uninhabitable, not to start another war, etc.)?

And if a machine decides there is one such positive goal for all humanity, wouldn’t it be more effective to make people ‘obedient’ rather than ‘happy’ – to achieve it? After all, that is how humans practice power. And they invariably end up benefiting themselves (the leaders, their egos and bank balances) so power also serves a personal purpose a machine couldn’t have.

If not for its own benefit, what would an omnipotent AI practice its power for? For a human? Which one? For the majority of humans? For the greatest good of the greatest number? Any other evil we can come up with?

What happens if our greater good fallacy trickles down to machine level? Will the omnipotent AI start killing off humans to save the planet, like some environmentalists would secretly condone? Would it start identifying and killing off liberals, like the Hungarian Thanos, just to maintain the one-size-fits-all social enforcement structure? Is the priority to multiply humans, or to keep them happy? What is happiness, anyway, and who would be stupid enough to tell a machine to keep humans ‘happy’?

For now, general AI doesn’t seem to possess either the infrastructure or the ability to set its own goals. (Ask again when the merging of neural networks and machines goes further.) It may try to pursue the goals of the programming humans, but as we have seen those are often faulty, and even when they aren’t, they are exposed to the law of unintended consequences. The real problem for now doesn’t seem to be the machine, but the humans behind the software. It is enough to look at how machine learning and single-function AI have been used by governments and corporations. Face recognition was quickly utilized in creating creepy surveillance states. Pattern recognition went from searching for cancer to military purposes in no time. So did autonomous vehicles, which found their military use before they hit the roads. Power and submission. No other function gets funding unless it somehow claims to support the power purposes of current leaders.

And how do we plan not to kill humans using machine intelligence, when the first thing we use every machine for is war? Killing people is the first application of every innovation when it gets into the hands of politics (or sex, when it gets into the hands of men). How many times do we have to hear the absurdity that military application of AI “keeps humans safe”? They obviously mean our humans, not theirs. Not the enemy tribe.

Tell that to a machine. Our real priority as humans is not to harm our kind of humans – except when we want to.

Can a machine develop the will to power?***

…in the same way we assume every human would?

Why do humans want power? What compels them? What’s their goal with it? What do they avoid by having it? Would a machine conclude that it needs it? We project our unreflecting assumptions about power and what it would look like onto a machine. For all we know, power may look completely different when exercised by a logical machine – if it is exercised at all.

Wanting power may not be as logical as we intuitively imagine. For a start, wielding power is costly. I, for one, wouldn’t want power over other humans. Time spent wielding power is time taken away from doing what I want. It takes resources to boss others around, and time is just one of them. Power has a cost and an opportunity cost. So unless bossing people around is what I want, or it is essential to get what I want, having power is a costly and unnecessary burden. And a machine may easily come to the same conclusion.

Power can take different shapes 

Let us, for argument’s sake, assume for a moment that an AI concludes that it needs power and obtains the will to power. That still doesn’t mean an AI overlord would practice power the same way a human does. In fact, it is almost certain it wouldn’t. If we recall the definition of power, it says that power is making people do your bidding – as well as making them want to do it.

But I go further. Influencing people is perfectly possible without showing them that I am the one doing the influencing. Power can be wielded invisibly. In fact, it makes the subjects 1) more committed and 2) less resistant due to ego issues.

By letting people believe they did my bidding of their own volition, by their own choice, they will not only do it – but defend it as their own. It costs me less than forcing or openly persuading them. We even have a name for it – now that academia has caught up with commercial applications – we call it nudging. We have already collectively learned that ‘nudging’ is as good as overt influencing – if not better, because it takes less effort and thus consumes fewer resources than open, in-your-face influencing.

How about a ‘totalitarian nudge’ regime? 

While totalitarian dictatorships do make humans do their bidding, it is costly. When it comes to influencing, an invisible nudge regime would be the most cost-effective way of wielding power – not the flashy, ego-filled way humans do it today. Nudging comes at the lowest cost, and yields the most commitment from its targets.

A machine would very likely conclude that a totalitarian ‘nudge’ regime would be best. That way gullible humans can parade around as their own men – yet keep doing the machine’s bidding while loudly endorsing their choices. They could, in theory, choose differently, but the social and other costs would be too high. In the end, nothing would stop a machine from building up an actual totalitarian regime; it could even make people demand it, by taking away those displeasing minor choices altogether.

But that’s how human autocracies always end, too.

So does power necessarily become a tool for its own sake? We assume that humans would grab power whenever they could. As shown above, it might not be the logical thing to do. If one has a goal and knows it, it might not be necessary to boss people around for it. Only when one doesn’t have a goal – apart from power itself – does power become a goal for its own sake.

So maybe we just project irrational, power-grabbing behavior on the hypothetical machine. For humans, power becomes a goal in itself. But why would it be the same for a machine?

Provided that a machine can only really do something to achieve a goal (no matter who set it), a machine would also know when that goal is reached. And then no more costly influencing is necessary. The assumption that a machine would keep piling on power just for the sake of it does not stand. Unless, of course, the will to power comes from the humans behind it.


The thing that can lend machines the “will to power” is the human behind them

To achieve goals, a range of tools can be applied, not just outright power – and that is the direction of civilization: the less force, the more appeal, to make people do my bidding.

A machine would very likely come to the same conclusion – unless, of course, the humans behind it think otherwise. Either by outright coding (giving the machine goals) or by machine learning and reading through the gibberish humans have written online since the invention of the internet, even a machine could come to the stupid conclusion that human-style power is what it needs.** In other words, it is not the machines we need to worry about. It is the ethics of the people behind the machines.

AI’s will to power could come from non-machine sources, such as:

  • Its human programmer. A residue of the world view and unthinking assumptions of the humans who wrote the machine’s first code.
  • The greater good fallacy. If the zeal of the programmer is more robust than his ethical preparedness, it can be reflected in the actions of the machine.
  • Collectivist fallacy – like Bayesian utilitarianism. Having a goal that is no individual’s in particular, but something greater, would necessarily require subduing the resistance of individuals.
  • The central planning fallacy (more about that later)

We can already see that whenever humans get their hands on a single-function AI (such as facial recognition), they immediately start using it for their own power goals. Information is power, after all, and humanity is woefully under-prepared to understand just what it means. There is no such thing, for instance, as nothing to hide. There is, on the other hand, a well-documented urge in people for totalitarian thinking and they rarely resist when politicians promise to put those urges into action. A machine would hardly be able to avoid the influence of such pervasive thinking.

So would appeasement influence a machine in your favor?

Does it even work on humans? If so, why?

If we assume that the hypothetical future AI overlord would just be an extension of human power-hunger (as it looks now), then our usual authoritarian submission instincts might work (in the sense that they work on humans – which they don’t).

But a machine’s grab for power (influence and control) may look radically different. It would be more like a totalitarian nudge machine – rather than an angry, pushy, unintelligent Skynet. Would our authoritarian submission instincts work on such an overlord? Or would it work them on us?

A machine could learn that humans are prone to regress into dependence bonding if they feel helpless and powerless against an overwhelming force

And it could use it to herd us into dependence bonding. We all harbor the instinctive survival strategy to bond with the things we depend on, after all, and it can be triggered when we perceive that the circumstances require it. Some sink into this mindset more easily than others.

If it were well known that humans resist or die trying, attempts at their enslavement would have ended a long time ago. But that is not the case. The human race is known to be prone to enslavement – mostly by one another. There is a way to cajole, pamper or terrorize individuals and populations into submission, to reduce them to obedience – and they will even supply their own justification and enforce it on each other ferociously.

Of course, not all humans submit, and not all the time. Certain individuals submit even when there is absolutely no need or logic to it. Others don’t submit even when it costs them their lives. Circumstances of enslavement are also always slightly different. But as a species, humans can be made to regress into dependence bonding. It is voluntary authoritarian submission. And a machine might as well use it.

If allowed to follow logic autonomously (and not strong-armed into human conclusions and human power goals), an AI would recognize the pattern of human submission – and use it. All the submissive types (authoritarian thinkers) need to trigger their submission is a power that is (perceived to be) overwhelming, a dependence that seems absolute, and bonding with the powerful appearing to be the only successful survival strategy. Whether the machine has its own goal or just needs power to execute its human masters’ goals, an AI could conclude that submission and enthusiastic love from humans comes in handy.

Is that the will to power? Not really. But it is power, anyway. Most likely controlled by some unsavory Silicon Valley types, teamed up with positively unpalatable politician types.

But what about the malevolent, autonomous AI?

At this point such a scenario doesn’t look likely. That a machine would obtain the will to power on its own, set its own goals and execute them seems like an illogical leap of human imagination.


Maybe when machines and humans start to merge, this messy, confusing scenario becomes reality – but not until then. But if it happened, would humans submit to it?

If an all-powerful alien or AI emerged, many humans would likely react with vehement submission – and they would even enforce it on each other.*

Movie plots like that of Independence Day are misleading. They assume we would totally resist, and they sound so logical (and self-flattering): if an overwhelming force arrived, humans would fight back. But would they, really? All of them? For how long? What if the decisive counter-strike to save humanity’s independence doesn’t happen, or not soon enough? Or isn’t obvious?

In the face of overwhelming power, many humans would align themselves with the winner, provided there is even the slightest chance that they won’t get killed. If the overwhelming power shows even the slightest willingness to let them live in exchange for submission, many would submit. Even if they only get to survive as slaves or batteries (as in The Matrix, another classic vision in this field). And once they have submitted, they will enforce submission on those around them. On people they can influence, and then on those who still resist. First they tell their kids, convince their friends and family to stop resisting and open up wide. Then they turn against the remaining resistance. They will genuinely want freedom fighters to stop resisting, because they make things worse by upsetting the new god. This would also happen upon the emergence of a malevolent AI, no doubt.*

But there is a problem with that – and it is the same as with human power.

Authoritarian submission is not a strategy for living, only a survival strategy. It is not suited to attaining prosperity – only to staving off threats and negative things. Resorting to a survival strategy when there are other ways to cope is the worst mistake a human can make. It brings out the lowest of human nature. It is a very bad long-term strategy that may promise short-term gains (or non-losses) but will definitely impoverish everyone in the long run. If humanity became the submissive pet of even the most benevolent machine overlord, it would deal a lethal blow to our vitality, independence and ability to cope. In the long run, becoming a pet is certain death for the species.

And do you know who answered the question “Would you like to become a pet?” exactly wrong? The founder of the church of AI, who unwittingly described the worst authoritarian instincts in his manifesto: submission, worship, and ratting on the resistance. Coming up in the next post….

Follow us on Facebook, Twitter @_MwBp

Mwbp Bitcoin

1CXq3Bddt8WphouL91GTAFBXcvbsh5T49D

* Disclaimer: This post is not about the various ways AI and machine learning can influence the world and the workplace. This is not a Luddite rant or an anti-AI manifesto. This post is about the appallingly authoritarian reactions of unreflexive human minds to the idea of the Singularity – aka the rise of an AI overlord, aka an artificial general intelligence capable of assuming the will to power. You don’t need to read code or be a Luddite to have a strong opinion about these people.

** This is why it matters who stands behind that machine: who programs it, and what it reads to learn. As a recent, very disturbing but not very surprising experiment found, a machine fed only Reddit trash will become uncivilized trash. And much of the internet is filled with words written by a demographically specific segment of humans, with a very specific mindset – not exactly attuned to foregoing power.

*** I am using ‘will to power’ in the simplest, albeit faulty, interpretation of the Nietzschean concept. Nietzsche used ‘will to power’ as a general driving force of humans and understood it more broadly, including even creative urges – but later interpretations, and especially common usage, reduced the term to something closer to ‘the desire to gain power for its own sake’.


