by Glen Wallace
Often, when a new contract, real estate development, factory, or mining operation is objected to for any of a variety of reasons, whether environmental or geopolitical, the canned retort is that the proposal will generate jobs. Well, if all those jobs are replaced by robots, then that 'playing card' of the proponents of the morally dubious proposal is eliminated. A case in point is the proposed copper-nickel sulfide mining operation in Northern Minnesota. Even though it has been proven that sulfide mining cannot be done in an environmentally safe manner, many residents of the affected areas, along with the politicians representing them, support the proposal for the sole reason of all the jobs that would supposedly be generated for that economically struggling part of the state. But if most of those jobs are automated with robots, as it appears they will be, then that sole reason is removed from the equation, and any justification for the sulfide mine is correspondingly eliminated.
The same principle holds true for defense contracts with Saudi Arabia. The only justification given for following through with those contracts, despite the brutal genocidal war the Saudis are perpetrating against the people of Yemen, is all the jobs created for US defense contractor employees. While I don't think those defense contracts should be granted no matter how many jobs would be created, all those armament manufacturing jobs could eventually be done by robots. When that happens, the bargaining chip of job creation will be off the table for politicians and contractors trying to persuade the public to sign off on morally corrupt arms sales.
The risks, on the other hand, of a robot revolution are many and varied. While the problems of wealth distribution, and the detachment problems arising from the human tendency to psychologically identify with one's job, have been discussed widely, I would like to touch on another angle relating to the danger of robots turning on humans. It is an angle I haven't read or heard anyone else discuss, and it is perhaps worthy of an essay devoted just to it, but for now I want to at least get the idea out there; maybe later I will stretch it into a complete essay.

The Chinese government has recently instituted a social credit system. The social credit system is problematic in its own right, but if we take as a given that the arguments against it will not persuade the Chinese government to ditch the system, perhaps they may want to do so out of pure practical necessity. I'm sure that the credit system already utilizes some degree of AI to dole out or remove credits for the citizens. Whether through deliberate hacking and sabotage or through unintended AI consequences, the potential exists for the AI algorithm to decide that the Chinese government leadership should have its own social credit rating downgraded to the point where it can no longer function in a decision-making capacity for its own country. At that point, the AI computer making the social credit deliberations will take over control of the Chinese government, after rendering the former human occupants of government obsolete.