Silicon Valley: Problem or Solution?


‘There is no more neutrality in the world.
You either have to be part of the solution,
or you’re going to be part of the problem.’

Eldridge Cleaver

The high tech high rollers build the robots, code the algorithms, and wire up the machine intelligence that threaten jobs. If they’re the problem, what’s their solution?

Elon Musk:  Universal basic income is “going to be necessary” because “there will be fewer and fewer jobs that a robot cannot do better.”

Richard Branson:  “A lot of exciting new innovations are going to be created, which will generate a lot of opportunities and a lot of wealth, but there is a real danger it could also reduce the amount of jobs. Basic income is going to be all the more important. If a lot more wealth is created by AI, the least that the country should be able to do is that a lot of that wealth that is created by AI goes back into making sure that everybody has a safety net.”

Mark Zuckerberg:  “The greatest successes come from having the freedom to fail. Now it’s our time to define a new social contract for our generation. We should explore ideas like universal basic income to give everyone a cushion to try new things.”

Sam Altman:  “Eliminating poverty is such a moral imperative and something that I believe in so strongly. There’s so much research about how bad poverty is. There’s so much research about the emotional and physical toll that it takes on people.” (Altman’s company Y Combinator is conducting its own UBI experiment in Oakland.)

Ideas like this get labelled “progressive,” meaning “ahead of their time,” which in turn means “over my dead body.” We saw a few posts back that Pres. Johnson’s visionary Triple Revolution Report and National Commission on Technology, Automation, and Economic Progress ended up in the dustbin of history. Another technology/jobs initiative had already landed there two decades earlier:

“In 1949, at the request of the New York Times, Norbert Wiener, an internationally renowned mathematician at the Massachusetts Institute of Technology, wrote an article describing his vision for future computers and automation. Wiener had been a child prodigy who entered college at age eleven and completed his PhD when he was seventeen; he went on to establish the field of cybernetics and made substantial contributions in applied mathematics and to the foundations of computer science, robotics, and computer-controlled automation.

“In his article — written just three years after the first true general purpose electronic computer was built at the University of Pennsylvania — Wiener argued that ‘if we can do anything in a clear and intelligible way, we can do it by machine’ and warned that this could ultimately lead to ‘an industrial revolution of unmitigated cruelty’ powered by machines capable of ‘reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price.’”

Rise of the Robots: Technology and the Threat of a Jobless Future, Martin Ford.

Wiener’s article was never published, and was only recently (in 2012) discovered in MIT’s archives. Outspoken technology commentator Douglas Rushkoff hopes UBI meets a similar end. In a recent Medium piece, he called UBI “Silicon Valley’s Latest Scam.”[1] His main critique? UBI doesn’t go far enough:

“They will basically tell you that a Universal Basic Income is a great idea and more effective than any other method of combating technological unemployment, the death of the Middle Class and the automation of the future of work.

“They don’t propose a solution to wealth inequality, they only show a way to prevent all out mass social unrest and chaos, something that would inconvenience the state and elite.

“The bottom 60% of the economy, well what do you suppose is in store for us with the rise of robots, machine learning and automation …?

“California might get a lot of sunshine and easy access to VC, but they aren’t blessed with a lot of common sense. They don’t know the pain of rural America, much less the underclass or warped narrative primed by Facebook algorithms or the new media that’s dehumanized by advertising agents and propaganda hackers.

“What if receiving a basic income is actually humiliating and is our money for opioids and alcohol, and not for hope that we can again join a labor force that’s decreasing while robots and AI do the jobs we once did?

“The problem lies in the fact that there won’t be a whole lot of “new jobs” for the blue and white collar workers to adapt to once they sink and become part of the permanent unemployed via technological unemployment.

“With housing rising in major urban centers, more folk living paycheck-to-paycheck, rising debt to income ratios and less discretionary spending, combined with many other factors, the idea of a UBI (about the same as a meagre pension) saving us, sounds pretty insulting and absurd to a lot of people.

“Since when did capitalism care about the down trodden and the poor? If we are to believe that automation and robots really will steal our jobs in unprecedented numbers, we should call Basic Income for what it is, a way to curtail social unrest and a post-work ‘peasant uprising.’

“Getting [UBI] just for being alive isn’t a privilege, it’s a death sentence. We are already seeing the toll of the death of the middle class on the opioid epidemic, on the rise of suicide, alcoholism and early death all due to in part of the stress of a declining quality of life since the great recession of 2008.”

If UBI doesn’t go far enough, then what does? Mark Zuckerberg used the phrase “new social contract” in his quote above. More on that coming up.

[1] UBI advocacy group BIEN (Basic Income Earth Network) reported Rushkoff’s opinions in a recent newsletter, and described his alternative:  Universal Basic Assets.

Race Against the Machine Part 2

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

“Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.”


Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.