How do you say ‘power’ in algorithm?

‘Power’ is an English word, but a machine needs a more precise definition.


To understand what power means to a machine, we could do worse than defining it as a set of priorities. Priorities can be negative (things to avoid) or positive (goals to pursue).

Getting priorities wrong can be lethal. In fact, it already has been. Not long ago a self-driving car spent luxurious seconds trying to work out what was crossing the road (Is it a cyclist? A car? Is it a plane? Is it a bird?) rather than making sure not to hit it. The priority was identifying the object, not avoiding it. A tiny detail in the code, a huge difference in the outcome. When you are absorbed in how difficult it is to recognize cyclists, or humans above a certain height, it is easy to forget to set the priority right in the first place: “Doesn’t matter what it is, just don’t hit it.” It may be a hologram of Princess Leia projected by R2-D2 into the middle of the road ahead of you, and thus harmless to drive through, but it is not the job of any car or driver to make sure it never slows down unnecessarily. The priority is to make sure not to hit anything.
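The difference between the two orderings fits in a few lines. The sketch below uses invented function names (it is not any real autopilot API): in the flawed version braking is conditional on classification succeeding; in the safe version braking comes first, unconditionally.

```python
# Toy sketch of the two priority orderings. Names are hypothetical.
KNOWN_HAZARDS = {"cyclist", "car", "pedestrian"}

def classify(obstacle):
    # Stand-in for a slow, uncertain perception step.
    return obstacle.get("label", "unknown")

def react_flawed(obstacle):
    actions = []
    if classify(obstacle) in KNOWN_HAZARDS:  # a hologram? a bird? no brake
        actions.append("brake")
    return actions

def react_safe(obstacle):
    actions = ["brake"]   # priority: just don't hit it
    classify(obstacle)    # identification can wait
    return actions

mystery = {"label": "unknown"}
print(react_flawed(mystery), react_safe(mystery))  # → [] ['brake']
```

The unclassifiable object is exactly the case where the two orderings diverge: the flawed loop never brakes for it.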

So getting your priorities right is crucial.

Giving algorithmic trading software the top priority of making money sounds perfectly logical. But what happens when the machine breaks? It cannot make money while broken, so perhaps the priority of making money must be preceded by the priority of not breaking.
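One way to encode that ordering is to run the negative priorities as vetoes before the positive goal ever gets a say. A minimal sketch, with made-up checks and a made-up risk limit:

```python
# Negative priorities as ordered vetoes; the profit motive acts only
# if none of them objects. All names and limits are invented.

def does_not_break_the_system(order):
    return order["size"] <= 1_000_000  # made-up risk limit

def harms_no_one(order):
    return not order.get("harms_a_human", False)

VETOES = [harms_no_one, does_not_break_the_system]

def may_execute(order):
    # Every veto must pass before the money-making goal is pursued.
    return all(veto(order) for veto in VETOES)

print(may_execute({"size": 500}))                          # → True
print(may_execute({"size": 500, "harms_a_human": True}))   # → False
```

The point of the ordering is that no amount of expected profit can buy back a failed veto.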

Or what happens when making money can be achieved through someone’s death? If the only thing standing between you and an inheritance is that your uncle is still alive, the logical move is to make the uncle die. Yet people don’t do the logical thing, because we have higher-level priorities carved into our code.

Asimov’s Three Laws of Robotics, from the 1942 story “Runaround”, are the best-known set of (negative) priorities in this field.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
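Read as a decision procedure, the three laws form a strict priority order. The toy encoding below is an illustration only: the input flags it takes for granted (“harms a human”) are exactly the predicates nobody knows how to compute, and inaction-harm is not modeled at all.

```python
# Toy encoding of Asimov's three laws as a strict priority order.
# The boolean flags are assumed to be magically available, which is
# precisely the unsolved part.

def robot_permits(action):
    if action["harms_human"]:            # First Law: absolute veto
        return False
    if action["ordered_by_human"]:       # Second Law: obey, unless vetoed above
        return True
    return not action["endangers_self"]  # Third Law: self-preservation last

# An order to harm a human is refused; a safe self-chosen act is allowed.
print(robot_permits({"harms_human": True, "ordered_by_human": True,
                     "endangers_self": False}))   # → False
print(robot_permits({"harms_human": False, "ordered_by_human": False,
                     "endangers_self": False}))   # → True
```

Note that an order from a human overrides self-preservation, just as the Second and Third Laws prescribe.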

Asimov’s laws for androids are like the Ten Commandments for humans. Once you get past the first four (which are just commands to suck up to the deity), the rest are negative priorities. Don’t kill another human. Don’t take their stuff. What not to do. If killing a human appears to be the way to achieve your goal, don’t do it.

Upon further consideration Asimov added a “zeroth law”: 0. A robot may not harm humanity, or, through inaction, allow humanity to come to harm. Which goes to show that setting priorities right is not as straightforward as it looks. The definitions of ‘harm’ and ‘human’ are by no means obvious to a machine. Others argue that these soothing, human-centric musings are not even sufficient to stop machines from killing humans.

But killing them is not the only bad thing a machine could do to harm humans.

A more realistic set of machine laws came out of recent experience. Oren Etzioni of the Allen Institute for Artificial Intelligence proposed a new set of laws that reads like this:

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.

  2. An A.I. system must clearly disclose that it is not human.

  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

Programming human laws into AI sounds perfectly logical (and a logical upgrade to Asimov’s simplistic commandments), but how are you supposed to do that? If we cannot even define ‘human’ and ‘kill’, how are we supposed to tell a machine that it is an offense to mow your lawn on a Sunday?

To complicate things further, we have an entire court system to ascertain what those laws mean in practice. (In effect, it mostly tries to ascertain the intention behind human actions in order to apply the appropriate repercussions.) And from now on a self-driving car will be making those calls? A machine that has the capacity to think through a billion scenarios (unlike a human), but not the moral nature to make the right choice – i.e. the one humans would agree with. Whatever that means. It is not at all surprising that authorities are so keen on making sure that a machine’s autonomous decisions remain understandable to humans ex post.

An even greater problem is that priorities can clash. Even top-level ones do. Don’t let harm come to humans, fine. But what about the trolley problem? Is the machine supposed to perform a cold calculation of “the greatest good for the greatest number” (programmers are fond of that kind of arithmetic)? Should the utilitarian, quantitative fallacy prevail?

When destined to kill someone, should an autonomous vehicle hit the driver or the pedestrian? One person or multiple people? The innocent or the guilty? An ex-con or a nun? The young or the old? The civilian or the politician? The policeman or the soldier? The insured or the uninsured? The one insured by another company?
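A “greatest good for the greatest number” chooser reduces to arithmetic over a weight table, and the weights are exactly where the moral load hides. Every number in the table below is made up; that is the point.

```python
# Utilitarian trolley arithmetic as code. Each entry in WEIGHTS is an
# arbitrary moral judgment smuggled in as a constant.

WEIGHTS = {"pedestrian": 1.0, "driver": 1.0, "nun": 1.0, "ex-con": 1.0}

def expected_harm(victims):
    return sum(WEIGHTS[v] for v in victims)

def choose_course(option_a, option_b):
    # Lower summed harm wins; a tie is as unresolved in the code
    # as it is in the philosophy.
    return option_a if expected_harm(option_a) < expected_harm(option_b) else option_b

print(choose_course(["driver"], ["pedestrian", "pedestrian"]))  # → ['driver']
```

Change any weight, even slightly, and the “logical” answer flips, which is precisely why no company wants to publish the table.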

Google’s latest attempt at setting ground rules for AI is also not a solution. It reads like a list of things that have gone (publicly) wrong in the last few months, and a pledge never to do the exact same thing again. According to the blog post, Google will try to make sure that whatever it creates will:

  • Be socially beneficial
  • Avoid creating or reinforcing unfair bias
  • Be built and tested for safety
  • Be accountable to people
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles

As for the negative priorities, the pledge claims Google will refrain from creating:

  1. Technologies that cause or are likely to cause overall harm. (Defined very poorly: “we will proceed only where we believe that the benefits substantially outweigh the risks”, which, if you ask a programmer, is never the case. Humans are known to focus on the difficulty of a complex task and ignore the real-world consequences of their actions, often precisely to distance themselves from them.)
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. (“Directly”.)
  3. Technologies that gather or use information for surveillance violating internationally accepted norms. (If the norms allow it, if it is made legal, it’s all fine.)
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights. (‘Purpose’, not ‘impact’.)

But these are human priorities – not machine ones. Which leads us to concerns about the humans using the machines, not about the machines themselves, as prophets of an AI takeover envision.

A machine’s priorities will be a mix of negative and positive ones. The negative ones will range from not harming humans (however defined) to not letting harm come to the machine itself. But in order even to begin talking about power, we must discuss positive goals.

In positive terms, power requires setting goals – only then does it make sense to influence people. And by its own goals I mean goals that are not just a means to an end, not mere sub-goals towards a bigger one set by the humans.

What is power in positive terms? 

Setting your own goals and priorities is the precondition to power. It simply doesn’t make sense any other way. But can a machine do that? Or is it just creating sub-goals to achieve the higher goals given by the human?

And should a machine set its own goals?

If we define power as bossing people around, does that mean it has to serve a goal? Or do we boss people around for its own sake? Or maybe bossing people around for its own sake is a tool to maintain our status as leader – and thus a long-term strategy, so that we can be the bosses the next time we have a goal. It is not unlike maintaining a reputation as the top bully so that one day, when we need it, we can get others’ lunch money without having to use force.

But does the reputation of a bully work equally well for a machine? We know, for instance, that a strategy that works for a man doesn’t necessarily work for a woman. The right strategy incorporates the perception of the actor as well as the nature of the subjects. So our perception of the machine may determine whether bullying for its own sake would be a good strategy for it. (This is where the curious phenomenon of female-only AI assistants jumps to mind.)

Human leaders also keep subordinates occupied just to keep them from questioning their rule. They often set negative goals (for instance, to avoid apocalypse by Jews/immigrants/Italians/Muslims/liberalism/dragons/a rabid flock of bats). And that serves the goal of keeping power extraordinarily well.

Would a machine also create such common enemies – just like our leaders do? Is keeping power a priority for the machine? Can a goal clash with the priority to preserve human life? What happens if our greater good fallacy trickles down to machine level? Will the omnipotent AI start killing off humans to save the planet, like some environmentalists would secretly condone? Would it start identifying and killing off liberals like the Hungarian Thanos, just to maintain the one-size-fits-all social enforcement structure? Is the priority to multiply humans, or to keep them happy? What is ‘happy’?

How would a machine define the ‘life’ it is supposed to maintain? Is crawling in the mud, or being stuck in a virtual reality machine, ‘life’? The perpetuation of the species? How does the life of the individual relate to the survival of the species? Can a machine figure out the inherent conflict, or will it merely perpetuate the unthinking fallacies people (and engineers) have adopted about the subject?

Do we even know what our own priorities are? Ours, as in humanity. Can there be a goal for humanity – a negative or a positive one?

Can priorities even be collective? Can a collective entity smile and feel happiness when it reaches its goal, or can only individuals? Are there any positive collective goals – or only negative ones (i.e. not to make the planet uninhabitable, not to start another war, etc.)?

And if a machine decides there is one such positive goal for all humanity, wouldn’t it be more effective, in order to achieve it, to make people ‘obedient’ rather than ‘happy’? After all, that is how humans practice power. And they invariably end up benefiting themselves (the leaders, their egos and bank balances), so power also serves a personal purpose that a machine couldn’t have.

If you can intimidate a human into ‘loving’ an overwhelming force, why would you bother letting him decide? If not for its own benefit, what would an omnipotent AI practice its power for? For a human? Which one? For the majority of humans? For the greatest good of the greatest number? Any other evil we can come up with?

For now, general AI doesn’t seem to possess either the infrastructure or the ability to set its own goals. (Ask again when the merging of neural networks and machines goes further.) It may try to pursue the goals of the programming humans, but as we have seen those are often faulty, and even when they aren’t, they are exposed to the law of unintended consequences.

The real problem, for now, doesn’t seem to be the machine but the humans behind the software: the engineers lost in the difficulty of the task while ignoring its impact, the leaders using every new invention to subdue resistance and free will, and of course the society both are rooted in.

It is enough to look at how machine learning and single-function AI have been used by governments and corporations. Face recognition was quickly put to use in creating creepy surveillance states. Pattern recognition went from cancer screening to military purposes in no time. Autonomous vehicles, too, found their military use before they hit the roads. Power and submission. No other function gets funding unless it somehow claims to support the power purposes of current leaders.

And how do we plan not to kill humans with machine intelligence, when the first thing we use every machine for is war? Killing people is the primary goal of every innovation once it gets into the hands of politics (or sex, once it gets into the hands of men). How many times do we have to hear the absurdity that military application of AI “keeps humans safe”? They obviously mean our humans, not theirs. Not the enemy tribe.

Tell that to a machine.

“Dear AI, Our real priority as humans is not to harm our kind of humans – except when we want to.”



* Disclaimer: This post is not about the various ways AI and machine learning can influence the world and the workplace. It is not a Luddite rant or an anti-AI manifesto. It is about the appallingly authoritarian reactions of unreflective human minds to the idea of the Singularity – a.k.a. the rise of an AI overlord, an artificial general intelligence capable of assuming the will to power. You don’t need to read code or be a Luddite to have a strong opinion about these people.

