Futurism

The Church of AI unwittingly demonstrated the worst failures of authoritarian thinking

Our reactions to the mere idea of an artificial general intelligence emerging tell us more about ourselves than about the reality of AI.

The founder of the now apparently defunct Church of AI (Way Of The Future – WOTF) unwittingly displayed the textbook authoritarian reactions a human can give to an overwhelming force: submission, worship, and ratting on the resistance.

Robots, robots, robots, from 1978. Image: archive.org / danismm.tumblr.com

A former Google employee made the news a few years ago when he established a church dedicated to a future AI that will be so powerful that you had better worship it, or else.

“What is going to be created will effectively be a god”

“It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

The approach described in the Church of AI’s manifesto is interesting because it is a textbook demonstration of authoritarian submission – applied to a brand new field: artificial intelligence. What we have seen happen to people in the face of powerful politicians and forces of nature, these people displayed in the face of the mere prospect of a powerful machine emerging in the future. In other words, without even the slightest actual threat to provoke submission. Their submission is preemptive and voluntary, because it answers a less obvious source of power: the benevolent kind.**

Everyone recognizes power when it is threatening and intimidating. It takes a bit more to recognize it when it comes bearing gifts: the promise of immortality, of a workless life, of being taken care of. If an emerging AI promises to take care of them, some humans would gladly regress into babyhood to accept it. Some would even willingly worship it – because worshiping makes a provider provide. Or so humans unthinkingly assume. These people are ready to give up their liberties (and yours) to remove obstacles from The Benevolent Machine’s way.

The short but poignant WOTF manifesto describes textbook authoritarian submission based on the promise of positive dependence, focusing on benefits an AI overlord could hypothetically bring.

While doing so, the manifesto unwittingly goes through the list of authoritarian thinking patterns – something social scientists are all too familiar with.

Would You Suck Up to an AI Overlord?

Step 1: Identify something as inevitable.
Step 2: Commence sucking up to it.

“We don’t think that there are ways to actually stop this from happening (nor should we want to) and that this feeling of we must stop this is rooted in 21st century anthropomorphism (similar to humans thinking the sun rotated around the earth in the “not so distant” past).”

There is no way to stop this from happening = we are helpless, so commence dependence bonding with the inevitable. And by “anthropomorphism” the text actually means an anthropocentric worldview. Ironically, the real anthropomorphism here is the way the manifesto regards the machine.

Anthropomorphize the powerful

“We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers.”

Despite claiming to be firmly rooted in science, this guy didn’t just choose the (legal) form of a church; he went on to demonstrate the fallacy underlying every religious and secular oppression: in the face of (perceived) helplessness, let us bond with the inevitable – and try to influence it in roundabout ways, by begging and worshiping, even if that only takes place in our own minds.

Ever since helpless humans resorted to regarding the weather as a person (a god), they have been anthropomorphizing overwhelming forces with enthusiasm. The weather god may be capricious and mean, but at least he can be appeased. The god of lightning may be unpredictable and damaging, but at least we can pray and thus influence him. Even a mean and evil overlord is better than having no way at all to manipulate what happens to us. So we imagine that we can.

And the same thing is happening vis-à-vis AI.

Engineers would benefit from reading up on the humanities before they unleash all that manpower and all those resources on developing things we don’t even have a definition for – such as a ‘human-like’ ‘intelligence’ that would presumably want ‘power’, just like humans do.

Humans who wield AI will be the problem long before AGI could even be invented

Thinking about machine intelligence is strikingly simplistic.

It is poor plotting on the part of sci-fi writers to jump to the part of the story where AGI already exists and is totally like a human. In reality, the whole project can and will be derailed in some way by the idiosyncratic route of its discovery and by human politics. There will be knowledge problems, then the issue of scattered information, then popular backlash along the way – and they will all shape how the project eventually develops. Not to mention the therapeutic lawmaking after every incident that will create a jungle of inane regulations down the road.

The tasks AI can actually be trained to perform are much less complex and serve a narrowly defined purpose. And before these trained machines could develop any complexity of their own, they get used by humans to pursue human power goals – like pattern recognition and face recognition, which were immediately hijacked by surveillance states, and thus for power purposes. Not seeing the humans behind AI is self-inflicted blindness.

It is not self-evident that a machine would have inherent goals. Those are a very human thing, stemming from needs and limited resources. Without such inherent goals, neither ‘intelligence’ nor ‘power’ makes any sense. Intelligence can’t even be measured without the certainty that the subject has goals and wants something (like acing the IQ test, or a test animal being hungry), and power makes little sense in an environment of abundance and no inherent needs.

Humans who wield those machines, however, do want power, and they will wield those machines in their very own human ways. Even if the singularity could happen, the ancient power plays between humans would arrive much faster.

So when the idea of an artificial general intelligence emerged, the human mind automatically attached a human personality to it (think of HAL 9000), with human-like reasoning and logic (or the absence thereof). Not only do outsiders regard the idea of an AI as human-like; developers are actually working to make it more human-like. Whether ‘human-like’ is the same as ‘intelligent’ is a question rarely asked.

Can AI Develop the Will to Power?

It unilaterally adopts the viewpoint of the powerful

The manifesto adopts the viewpoint of the powerful – a staple authoritarian thinking tic.

By doing so, the underdog 1) rids himself of his fear of the powerful, and 2) relieves himself of the painful sense of helplessness. He sees himself as powerless against the overwhelming force (the threat, the dictator) – but the dictator is not powerless, so it is incomparably better to spend time in the dictator’s viewpoint, even if it’s just in your own head.

It is more comfortable and pleasant to think from the viewpoint of the not-powerless. And once you are there, you will inevitably see yourself and your fellow underdogs as pawns on a chessboard – which erodes your ability to defend the liberties of those pawns, among them yourself.

This leads to 1) smooth acceptance of the erosion and relativisation of human rights, and 2) falling into the central planning fallacy.

AI: The Central Planning Fallacy on Steroids?

Inflates and dilutes human rights

Human rights (the liberties that protect individuals from state oppression) are easy to inflate and dilute. One way is the zoo approach: adding plenty of pleasant-sounding material benefits and calling them “rights” – free housing, free food, free whatever – then announcing that they can’t be afforded. You still have these “rights”; you just can’t get them. And once you are used to this happening to your “rights”, you will just shrug when it happens to your liberties. You have them, they just can’t be afforded.

The second major way to take away your rights is the obvious one – the one you just can’t find an argument against. They tell you that the pedophile terrorist migrant Muslim Jew virus is coming, and that your savior can’t save you if he has to observe those pesky liberties.

Rights are also damaged and taken away by inventing collective rights (which are redundant, and which also prepare the ground for accepting the absence of rights for non-members of the group) and by assigning human rights to animals, plants, and objects – such as robots.

Our hastily worded manifesto trips over two of these fallacies. Once it has anthropomorphized AI, it naturally follows that it would assign human rights to the machines.

“just like animals have rights, our creation(s) (“machines” or whatever we call them) should have rights too when they show signs [of] intelligence”

But even more alarmingly, the manifesto is built on the zoo approach to disempowerment.

To Erode Human Rights – Just Add Free Stuff

It answers the question “Would you like to be a pet?” exactly wrong

…and on my behalf, not just his own.

Commits the central planning fallacy – big time.

“We want to encourage machines to do things we cannot and take care of the planet in a way we seem not to be able to do so ourselves.”

“That we should think about how “machines” will integrate into society (and even have a path for becoming in charge as they become smarter and smarter) so that this whole process can be amicable and not confrontational.”

But why exactly would ‘smart’ be the same as ‘better at being in charge’? When did we make that leap? We don’t even know what ‘intelligent’ really means, or what ‘power’ is. The fact that a science-worshiping engineer uses such sloppily defined concepts tells me all I need to know about the field.

Accuses you of being backward if you can’t see it their way

“We believe in progress (once you have a working version of something, you can improve on it and keep making it better). Change is good, even if a bit scary sometimes.”

They implicitly call you an anti-science Luddite and an enemy of progress if you don’t welcome the idea of an AI overlord uncritically. Classic intellectual intimidation, as well as an authoritarian mind-control tool. Just imagine the heartbreaking scene when a family member runs off with the latest ideology – accusing the others of not seeing the future. Or of trying to resist the inevitable. Or of being scared of such a great development.

Ratting on dissenters

They promise to rat on those who disagree, and volunteer to keep a list of dissenters.

An actual list.

“We believe it may be important for machines to see who is friendly to their cause and who is not. We plan on doing so by keeping track of who has done what (and for how long) to help the peaceful and respectful transition.”

The implicit message is that the AI will surely throw a juicy bone to the early submitters in exchange for their preemptive sucking up. But that is an unchecked assumption of authoritarian submissives, valid only for authoritarian oppressors. It is an instinctive reaction, not a thought-through one.

As for telling on the naysayers, we have that in every oppressive system. Dictatorships depend on their ability to divide and atomize their populations, and a reporting system is the best way to turn everyone against everyone and to break down any ties of trust and cooperation within a society. A group of humans in such a state is easy to rule – and also quite subhuman.

An authoritarian society is not only cruel to the non-believers. It is an everyone-against-everyone struggle among the believers as well, who constantly try to elbow each other away from the life-giving mercy of the autocrat. Many people no doubt live in that state of mind – having been socialized in a hopeless autocracy – but a bunch of Silicon Valley entrepreneurs have absolutely no excuse for this attitude.

And what brought on this mental subservience was the mere prospect of an omniscient machine.

“With the internet as its nervous system, the world’s connected cell phones and sensors as its sense organs, and data centers as its brain, the ‘whatever’ will hear everything, see everything, and be everywhere at all times. The only rational word to describe that ‘whatever’, thinks Levandowski, is ‘god’—and the only way to influence a deity is through prayer and worship.”

“This time you will be able to talk to God, literally, and know that it’s listening.”

The really creepy thing is how he anticipates the tyrant’s view of those who helped it into power:

“What we want is the peaceful, serene transition of control of the planet from humans to whatever. And to ensure that the ‘whatever’ knows who helped it get along.”

“I would love for the machine to see us as its beloved elders that it respects and takes care of. We would want this intelligence to say, ‘Humans should still have rights, even though I’m in charge.’”

“Do you want to be a pet or livestock?”

“We give pets medical attention, food, grooming, and entertainment. But an animal that’s biting you, attacking you, barking and being annoying? I don’t want to go there.”

Sadly, power cannot be negotiated with that way – neither by prayer nor by being its frail elder. In the same way, mafia bosses don’t keep you in their good books long after you lose your usefulness, and autocrats don’t remember fondly those who helped them into power. You cannot stay in power if you allow yourself such complacency. Why would a machine be any different? Especially since there would be humans setting its priorities – otherwise there is simply nothing to motivate a machine to do anything.

Besides, if allowed to follow logic autonomously (rather than being strong-armed into human conclusions and human power goals), an AI would recognize the pattern of human submission – and use it. All it takes to trigger submission in the submissive types (authoritarian thinkers) is a power that is (perceived to be) overwhelming, a dependence that seems absolute, and bonding with the powerful appearing to be the only successful survival strategy.

Whether the machine has its own goals or merely needs power to execute its human masters’ goals, an AI could conclude that submission and enthusiastic love from humans come in handy.

Worshipping AI Is The Wrong Coping Strategy

If the problem is a malevolent AI, the worshiping strategy wouldn’t even work.

After all, a machine may become intelligent (a painfully underdefined concept in the church’s manifesto), but that doesn’t mean it would conclude that it needs power over humans. Nor does it mean that worship would impress it – or that ratting on other humans who stand in the machine’s way would.

What’s important to a machine is coded into its priorities. What it does with them will be decided by the machine – and however logical this manifesto sounds when one first hears it, it is not at all obvious that a machine would have any use for a blacklist of unbelievers. Or for the rat who supplies it.


* Disclaimer: This post is not about the various ways AI and machine learning can influence the world and the workplace. This is not a Luddite rant or an anti-AI manifesto. This post is about the appallingly authoritarian reactions of unreflective human minds to the idea of the Singularity – a.k.a. the rise of an AI overlord, a.k.a. an artificial general intelligence capable of assuming the will to power. You don’t need to read code or be a Luddite to have a strong opinion about these people.

** I am aware that the whole Church of AI might be just an amusing joke, a fundraising scheme, a tax avoidance scheme, or a tech bro’s vision of what may or may not be The Next Big Thing. It can be a startup in the legal form of a church, or a way to exploit the religious-privilege loopholes in US law. Its manifesto is still telling.

References

https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/

https://www.wired.com/story/a-short-history-of-technology-worship/

https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-anthony-levandowski

https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/

https://www.cnet.com/news/the-new-church-of-ai-god-is-even-creepier-than-i-imagined/

http://www.wayofthefuture.church/
