Why we don’t need to fear the rise of the robots

Artificial intelligence, robots and automation! Do they worry you? Are you concerned about their impact on your career? Do you suspect that your job is easier to automate than you'd like to think? In this post I explain why you shouldn't get too distressed, and why it's important that the UK leads the way.

Donald Trump is barking up the wrong tree on the jobs issue. He's preoccupied with other nations taking US jobs, when what he should be worried about is the threat from machines. Over the next five to 15 years, a robot-led technology tsunami is going to hit the US job market. The best research to date suggests that almost half of all US jobs, some 80 million of them, could be lost as a result of advances in artificial intelligence, manufacturing robots, driverless vehicles and other emerging technologies. This is no longer science fiction.

Winning the AI War

In fact, the nation that wins the race to develop artificial intelligence will be able to become:

the ruler of the world

So predicted Vladimir Putin last year. That's why it's essential that the US and UK maintain the edge in this technology, rather than ceding it to Russia or China. Both of these countries are pulling out all the stops to win AI dominance. But here in the West we need to set ethical boundaries on where it ends up. For example, earlier this year, Google CEO Sundar Pichai declared that robots and artificial intelligence were going to have a more profound impact on the world than electricity or fire. But then he promised that there would be one area in which Google would refrain from unleashing its vast potential: weaponry. After all, war is one of the most profitable businesses there is; why wouldn't capitalists want a piece of the action? Morals rarely come into making money.

Project Maven

Then Google appeared to change its mind when it entered into its first major AI contract with the Pentagon: Project Maven. Established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017, the project uses machine learning and engineering talent to distinguish people and objects in drone videos. Also known as the Algorithmic Warfare Cross-Functional Team, it is, according to Lt. Gen. Jack Shanahan of the United States Air Force in November 2017, designed to be "that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department".

Its chief, U.S. Marine Corps Col. Drew Cukor, said:

People and computers will work symbiotically to increase the ability of weapon systems to detect objects.

At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a deliberate workflow process was funded by the Department [of Defense] through its rapid acquisition authorities for about “the next 36 months”.

Cause No Harm

Google then went on to declare that it would not, in fact, renew the Maven contract when it expires, and that it would eschew any technologies that cause, or are likely to cause, overall harm. This about-turn is not the result of an epiphany on Pichai's part. It's the result of pressure from his own staff, thousands of whom protested against Project Maven. They weren't won over by claims that this was a one-off project, initially said to be worth $9m, with strictly non-offensive purposes.

They rightly saw it as an audition for a much deeper collaboration with the Pentagon. It was part of Google's push to win the $10bn Joint Enterprise Defence Infrastructure (Jedi) contract, described by some as potentially the largest IT procurement project in history. Jedi is designed to set up a cloud computing system that can network American forces all over the world and integrate them with AI.

Politicised Tech Workers

So, thankfully, tech workers have become politicised in the Trump era. The end of Project Maven is a big win against US militarism. That said, some arguments over robots and automation here are spurious: the primary aim of such systems is to shorten the loop between detection and response, which gives the best chance of minimising collateral damage. Keeping people in the loop over the final decision has always been a military aim in limited war, to minimise the chance of error and, with it, the risk of political defeat.

But in the end, conflict is inherently uncertain and risky; what matters most is shifting that risk onto the enemy.


The desire of tech workers to build things that help rather than harm is commendable, and we should certainly debate the ethics of new technologies with military applications. But a key part of ensuring that robots and AI are used ethically is making sure that authoritarian regimes don't get to dominate the field and set the rules. Silicon Valley takes pride in advancing new ideas. But it should not dismiss some old ideas, including the one that holds that America is, overall, a force for good.

Of course, the military isn't the only area where we need to proceed with caution. Here are a couple of other examples:

GP at Hand and Babylon Health

Another area where we need to tread carefully is in health care. For example, should we use an app called GP at Hand as the first stop for primary care? The firm that developed it, Babylon Health, claims it can identify ailments as effectively as a real doctor, but is this true?

The aim behind it is laudable: to filter out the estimated 20% of consultations that don't require a fully trained doctor. But do the complexities of patients' symptoms confound algorithmic analysis? Might GP at Hand's limitations indicate that we need more human GPs, not more sophisticated AI?

Car Trouble

Another example is driverless cars. Do you believe that they will put people out of business? If you do then, in my opinion, you're wrong. These vehicles may work on motorways, but I doubt they'll ever be able to cope with congested urban streets, or even winding, narrow rural lanes. Like many other technological advances, driverless technology will not push people out of jobs. Rather, it will make lives easier by assisting drivers rather than replacing them.

Robots: What's Next?

Every wave of automation triggers warnings of mass unemployment that never materialise. In the early 1800s, the Luddites smashed up textile machinery because they thought automation would cost them their jobs. Historically, each technological advance has created more jobs than it destroyed. Yet there is something distinctive about the threat from today's automation, AI, and it lies in what's known as Moravec's paradox.

Moravec’s Paradox

This is the discovery that, contrary to traditional assumptions, high-level reasoning requires very little computation, while low-level sensorimotor skills need enormous computational resources. Hans Moravec, Rodney Brooks, Marvin Minsky and others articulated this in the 1980s. As Moravec writes:

… it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers. But it is difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

Similarly, Marvin Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious:

In general, we’re least aware of what our minds do best. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.

So AI finds the difficult things easy and the easy things difficult. Machines easily outperform adults in logic, but a one-year-old can outpace them in the basic functions of perception and mobility. For me, the real risk of increasing reliance on AI is not more joblessness, but growing inequality.


Robots and AI mean that the returns to the owners of sophisticated computer capital keep going up, while the wages of an increasingly deskilled workforce keep going down. This will hit middle-income workers hardest, the ones whose jobs depend on logic and number-crunching. Robots are displacing them, and they are having to take low-paid jobs such as waiting and bartending.

That explains something that has long puzzled economists: more and more jobs are being created, yet wages remain static. For me, governments need to intervene. If they don't, the future economy will look like this: a tiny number of rich people employing armies of poor ones to do menial and trivial tasks.

In conclusion

Politicians need to make sure that society can share the benefits of robots and automation, and the easiest way to do this is via a more equal allocation of time. In Britain, the proportion of our lives we spend at work, as opposed to sleeping, in education and so on, has shrunk: it has fallen from about 25% a hundred years ago to 10% today. If that percentage falls further thanks to computer-human symbiosis, then everybody can gain.

Do you agree, or disagree? Why not leave a comment below?

Photo by Alex Knight on Unsplash
