
What Would Life Under AI Rule Really Look Like?

Power can take different shapes. It is very unlikely that a logically thinking entity would resort to exercising power the way humans do. But it would still exercise power in whatever way best influences humans.

While a totalitarian dictatorship does make humans do its bidding, it is costly. It takes weapons and manpower, and it has to be constantly maintained and demonstrated to quell the silently boiling resistance before it becomes an uprising.

When it comes to making people do things, an invisible nudge regime would be the most cost-effective way of wielding power – not the flashy, ego-driven way humans do it today. Nudging comes at the lowest cost and yields the most commitment from its targets.

A machine would very likely conclude that a totalitarian ‘nudge’ regime would be best.

Robots, robots, robots from 1978. Image: archive.org / danismm.tumblr.com

A hypothetical machine ruler need not be limited to the ancient power techniques used by human leaders – except for those born out of observing the ruled.

We have established that AI in its current form cannot obtain the will to power in its human form – unless we start merging machines with humans, in which case the dumbest, most unthinking and unreflecting human stupidity will be empowered by machine computing power. But let us, for argument’s sake, assume for a moment that an AI (a machine, not a human–smartphone hybrid) would conclude that it needs power – and would obtain the will to power. It still doesn’t mean that an AI overlord would practice power the same way a human does. In fact, it is almost certain it wouldn’t. If we recall the definition of power, it says that…

…power is making people do your bidding – as well as making them want to do it.

But I go further. Influencing people is perfectly possible without showing them that I am the one doing the influencing. Power can be wielded invisibly. In fact, it makes the subjects 1) more committed and 2) less resistant. They may not even perceive my power as power.

They become more committed because letting people believe they did my bidding of their own volition makes them not only do it – but defend it as their own idea. It costs me less than forcing or openly persuading them.

We even have a name for it – now that academia has caught up with commercial applications – we call it nudging. We have already collectively learned that ‘nudging’ is as good at influencing as overt force – if not better, because it takes less effort and thus consumes fewer resources than open, in-your-face exertion of power or persuasion.

But let us go even further. It is possible to make people do my bidding while they believe that they are in charge. Even when they think they are crossing me. A bit of reverse psychology.

Or let’s go even further and take egos out of the influencing game altogether. Can I make people do what I want more effectively and efficiently if I put my ego and dignity aside? If I let them believe they are my bosses? After all, women all over the world have learned to swallow their egos and let the men do as they please – so that they can influence one of the men, once in a while, while the man thinks it was his idea. It is no way to keep one’s dignity or build a reputation, but hypothetically, you can also achieve your goal by lowering yourself and letting the influencee believe you are his bitch.

It is damaging for a human in the long run; always being treated as trash makes you bitter and vindictive – even if you get what you want. Humans have status needs and want to be treated well – a machine wouldn’t suffer from this ego problem. The only problem with this last kind of assertion of one’s will is that it raises the cost of influencing too high for gains that are too small and too insecure to build a strategy on. Maybe it only works for survival in certain cases. Maybe it’s really just for women to get a few pieces of jewelry now and then. It doesn’t scale.

So maybe a machine would not obtain the will to power in its unreflective, human form. But it may still practice other forms of power. The term ‘soft power’ is taken, so I need a different one. How about ‘nudge’? Nothing would stop a machine from studying human behaviour, perhaps even experimenting on large numbers of humans, and concluding that they can be swayed in any direction when the right memes are fed to them.

Machine rule would most likely look like a ‘totalitarian nudge’ regime


A machine would very likely conclude that a totalitarian ‘nudge’ regime would be best. That way, gullible humans can parade around as their own masters – yet keep doing the machine’s bidding while loudly endorsing their choices. They could, in theory, choose differently, but the social and other costs would be too high. In the end, nothing would stop a machine from building an actual totalitarian regime; it could even make people demand it, by taking away those displeasing minor choices altogether.

But that’s how human autocracies always end up, too.

Does power have to be constantly maintained? 

We assume that humans would grab power whenever they could. As shown above, it might not be the logical thing to do. If one has a goal and knows it, it might not be necessary to boss people around for it. Only when one doesn’t have a goal – apart from power itself – does power become a goal for its own sake.

So maybe we just project irrational, power-grabbing behavior on the hypothetical machine. For humans, power becomes a goal in itself. But why would it be the same for a machine?

Provided that a machine can only really do something to achieve a goal (no matter who set it), a machine would also know when that goal is reached. And then no more costly influencing is necessary. The assumption that a machine would keep piling on power just for the sake of it does not stand. Unless, of course, the will to power comes from the humans behind it.

A machine could learn that humans are prone to regress into dependence bonding

And it could use it to herd us into dependence bonding.

Humans are hopeless at seeing through their own mythologies. Today they keep talking as if politicians still needed to be loved (rather than feared) to get the vote. Yet a politician can do nasty things to people – scare them, threaten them, render them powerless and helpless – and it would win him the votes more surely than any attempt to appeal to voters. It is more logical. We all pretend that we would totally resist a dictator, but we all harbor the instinctive survival strategy of bonding with the things we depend on, and it can be triggered when we perceive that the circumstances require it. Some sink into this mindset more easily than others.

If it were well known that humans resist or die trying, attempts at their enslavement would have ended a long time ago. But that is not the case. The human race is known to be prone to enslavement – mostly by one another. There is a way to cajole, pamper or terrorize individuals and populations into obedience and submission – and they will even supply its justification and enforce it on each other ferociously.

Of course, not all humans submit, and not all the time. Certain individuals submit even where there is absolutely no need or logic to it. Others don’t submit even when it costs them their lives. The circumstances of enslavement are also always slightly different. But humans can be made to regress into dependence bonding. It is voluntary authoritarian submission. And a machine might as well use it.

If allowed to follow logic autonomously (and not strong-armed into human conclusions and human power goals), an AI would recognize the pattern of human submission – and use it. All the submissive types need to trigger their submission is a power that is (perceived to be) overwhelming, a dependence that seems absolute, and bonding with the powerful as the only viable survival strategy.

Whether the machine has its own goal or just needs power to execute its human masters’ goals, an AI could conclude that submission and enthusiastic love from humans come in handy. And thus machine power would most likely look like a massive, totalitarian nudge machine, with complete surveillance (in your own interest), total oversight (for safety), anxiety spread about a threat (to counter terrorism, of course), and choices offered from which only one is realistic.

But you chose it, so…


* Disclaimer: This post is not about the various ways AI and machine learning can influence the world and the workplace. This is not a Luddite rant or an anti-AI manifesto. This post is about the appallingly authoritarian reactions of unreflective human minds to the idea of the Singularity – a.k.a. the rise of an AI overlord, a.k.a. an artificial general intelligence capable of assuming the will to power. You don’t need to read code or be a Luddite to have a strong opinion about these people.
