In an article titled “You Will Lose Your Job to a Robot—and Sooner Than You Think,” writer Kevin Drum states:
“The AI Revolution will be nothing like [the industrial revolution]. When robots become as smart and capable as human beings, there will be nothing left for people to do because machines will be both stronger and smarter than humans. Even if AI creates lots of new jobs, it’s of no consequence. No matter what job you name, robots will be able to do it. They will manufacture themselves, program themselves, repair themselves, and manage themselves. If you don’t appreciate this, then you don’t appreciate what’s barreling toward us.”
He points out that robots will be cheaper, faster, and more reliable than humans—a combination that no good capitalist would dare ignore.
McKinsey research similarly reports that automation could put up to one-third of the US workforce out of a job by 2030, and could displace between 400 million and 800 million workers globally.
Imagine the social upheaval caused by such mass unemployment when we’re already seeing anger and resentment over the comparatively modest displacements we’ve experienced so far.
Don’t think you’re safe because you’re not in fast food, manufacturing, or retail. McKinsey says that automation would also impact “jobs that involve managing people, expertise, and those that require frequent social interactions.”
And it’s not just scholarly Luddites sounding the alarm. The darling of the tech hordes, Elon Musk himself, is among the AI Cassandras. He said:
“The risks of digital super intelligence—and I want you to appreciate that it wouldn’t just be human level. It would be super human almost immediately. It would just zip right past humans to be way beyond anything we could really imagine. A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. So I think the right emphasis for AI research is on AI safety. We should put vastly more research into AI safety than we should into advancing AI in the first place because it may be good and it may be bad. And it could be catastrophically bad and there could be the equivalent of a nuclear meltdown.”
Historically, we humans have never been very good at acknowledging, much less pre-empting, the consequences of our actions. For the sake of us all, let’s hope the impacts of automation and AI become the exception. How do global hordes of the desperate unemployed sound to you?
–Leonce Gaiter, Vice President, Content & Strategy