Fireflies and Algorithms

We’ve been looking at workfare — the legislated link between jobs and the social safety net. An article published last week, “Fireflies And Algorithms — The Coming Explosion Of Companies,”[1] brought the specter of workfare to the legal profession.

Reading it, my life flashed before my eyes, beginning with one particular memory:  me, a newly-hired associate, resplendent in my three-piece gray pinstripe suit, joining the 4:30 queue at the Secretary of State’s office, clutching hot-off-the-word-processor Articles of Incorporation and a firm check for the filing fee, fretting whether I’d get my copy time-stamped by closing time. We always had to file today, for reasons I don’t remember.

Entity choice and creation spanned transactional practice: corporate, securities, mergers and acquisitions, franchising, tax, intellectual property, real property, commercial leasing…. The practice enjoyed its glory days when LLCs were invented, and when a raft of new entity hybrids followed… well, that was an embarrassment of riches.

It was a big deal to set up a new entity and get it just right — make sure the correct ABC acquired the correct XYZ, draw the whole thing up in x’s and o’s, and finance it with somebody else’s money. To do all that required strategic alliances with brokers, planners, agents, promoters, accountants, investment bankers, financiers…. Important people initiated the process, and there was a sense of substantiality and permanence about it, with overtones of mahogany and leather, brandy and cigars. These were entities that would create and engage whole communities of real people doing real jobs to deliver real goods and services to real consumers. Dissolving an entity was an equally big deal, requiring somber evaluation and critical reluctance, not to mention more time-stamped paperwork.

Fireflies And Algorithms sweeps it all away — whoosh! just like that! — and describes its replacement: an inhuman world of here-and-gone entities created and dissolved without the intent of all those important people or all that help from all those people in the law and allied businesses. (How many jobs are we talking about, I wonder — tens, maybe hundreds of thousands?) The new entities will do to choice-of-entity practice what automated trading did to the stock market, as described in this UCLA Law Review article:

“Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human. Modern finance is transforming into what this Article calls cyborg finance.”

In that “cyborg finance” world,

“[The “enhanced velocity” of automated, algorithmic trading] has shortened the timeline of finance from days to hours, to minutes, to seconds, to nanoseconds. The accelerated velocity means not only faster trade executions but also faster investment turnovers. “At the end of World War II, the average holding period for a stock was four years. By 2000, it was eight months. By 2008, it was two months. And by 2011 it was twenty-two seconds….”

Fireflies And Algorithms says the business entity world is in for the same dynamic, and therefore we can expect:

“… what we’re calling ‘firefly companies’ — the blink-and-you-miss-it scenario brought about by ultra-short-life companies, combined with registers that remove records once a company has been dissolved, meaning that effectively they are invisible.”

Firefly companies are formed by algorithms, not by human initiative. Each is created for a single transaction — one contract, one sale, one span of ownership. They’re peer-reviewed, digitally secure, self-executing, self-policing, and trans-jurisdictional — all for free or minimal cost. And all of that is memorialized not in Secretary of State or SEC filings but on a blockchain.
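To make that lifecycle concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical names; it is not drawn from the OpenCorporates article) of an entity that is created by code, performs one transaction, dissolves, and survives only as a hash-chained record:

    import hashlib
    import json
    import time
    from dataclasses import dataclass, field

    ledger = []  # stand-in for an append-only, blockchain-style record

    def record(event):
        """Append an event to the ledger, chained to the previous entry by hash."""
        prev_hash = ledger[-1]["hash"] if ledger else ""
        payload = json.dumps(event, sort_keys=True) + prev_hash
        ledger.append({"event": event,
                       "hash": hashlib.sha256(payload.encode()).hexdigest()})

    @dataclass
    class FireflyCompany:
        purpose: str                 # the single deal this entity exists to perform
        created_at: float = field(default_factory=time.time)

        def execute(self, counterparty, amount):
            record({"type": "transaction", "purpose": self.purpose,
                    "counterparty": counterparty, "amount": amount})

        def dissolve(self):
            record({"type": "dissolution", "purpose": self.purpose,
                    "lifetime_seconds": time.time() - self.created_at})

    # Formed, transacted, and dissolved in one pass: no filings, no queue at the
    # Secretary of State's office, only the hash-chained trail left behind.
    co = FireflyCompany(purpose="sell one shipment of widgets")
    record({"type": "formation", "purpose": co.purpose})
    co.execute(counterparty="buyer-123", amount=1000.0)
    co.dissolve()

The point of the sketch is only the shape of the lifecycle: formation, a single act, and dissolution, all recorded by software rather than by time-stamped paperwork.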

“So what does all this mean?” the article asks:

“How do we make sense of a world where companies — which are, remember, artificial legal constructs created out of thin air to have legal personality — can come into existence for brief periods of time, like fireflies in the night, perform or collaborate on an act, and then disappear? Where there are perhaps not 300 million companies, but 1 billion, or 10 billion?”

Think about it. And then — if it hasn’t happened yet — watch your life flash before your eyes.

Or if not your life, at least your job. Consider, for example, a widely-cited 2013 study that predicted 57% of U.S. jobs could be lost to automation. Even if that prediction is only half true, that’s still a lot of jobs. And consider a recent LawGeex contest, in which artificial intelligence absolutely smoked an elite group of transactional lawyers:

“In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

“The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes.”

These developments significantly expand the pool of people potentially needing help through bad times. Currently, that means workfare. But how can you have workfare if technology is wiping out jobs?

More on that next time.

[1] The article was published by OpenCorporates, which according to its website is “the world’s largest open database of the corporate world and winner of the Open Data Business Award.”

The Rentier Economy: A Primer (Part 2)

My plan for this week’s post was to present further data about the extent of the rentier economy and then provide a digest of articles for further reading.

Turns out that wasn’t so easy. The data is there, but it’s mostly buried in categories like corporate capitalization, profits, and market concentration. Extracting it into blog-post-sized nuggets was going to take more digging than I expected.

Further, the data was generally only footnoted in a maelstrom of worldwide commentary. Economists and journalists treated it as a given, barely worthy of note, and were much more interested in revealing, analyzing, and debating what it means. The resulting discourse spans the globe — north to south, east to west, and all around the middle — and there is widespread agreement on the basics:

  • Economic thinking has traditionally focused on income from profits generated from the sale of goods and services produced by human labor. In this model, as profits rise, so do wages.
  • Beginning in the 1980s, globalization began moving production offshore in search of cheaper labor.
  • Since the turn of the millennium, artificial intelligence and robotics have eliminated jobs in the developed world at a pace slowed only by the comparative costs of technology vs. human labor.
  • As a result, lower per-unit costs of production have generated soaring profits while wages have stagnated in the developed world. In other words, the link between higher profits and higher wages no longer holds.

Let’s pause for a moment, because that point is huge. Erik Brynjolfsson, director of the MIT Center for Digital Business, and Andrew McAfee, principal research scientist at MIT, wrote about it in their widely cited book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014). The following is from a chapter-by-chapter digest  written by an all-star cast of economists:

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress.

On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

Okay, point made. Let’s move on to the rest of the rentier story:

  • These trends have been going on for the past four decades, but they have accelerated since the 2007-2009 recession. The result has been a shift to a new kind of job market characterized by part-time, on-demand, contract, and freelance positions that pay less and don’t offer fringe benefits. Those who still hold conventional jobs with salaries and benefits are a dying breed, and probably don’t even realize it.
  • As production has shifted away from wage earners, profits have soared, resulting in a surplus of corporate cash. Low labor costs and technology have created a boom in corporate investment in patents and other rentable IP assets.
  • Rent-seeking behavior has been increasingly supported by government policy — such as the “regressive regulation” and other “legalized monopoly” dynamics we’ve been looking at in the past few weeks.
  • The combination of long-term wage stagnation and spiraling rentier profits has driven economic inequality to levels rivaled only by pre-revolutionary France, the Gilded Age of the Robber Barons, and the Roaring Twenties.
  • Further, because the rentier economy depends on government policy, it is particularly susceptible to plutocracies, oligarchies, “crony-capitalism,” and other forms of corruption, leading to public mistrust in big business, government, and the social/economic elite.
  • These developments have put globalization on the defensive, resulting in reactionary politics such as populism, nationalism, authoritarianism, and trade protectionism.

As you see, my attempt to put some numbers to the terms “rent” and “rentier” led me straight into some neighborhoods I’ve been trying to stay out of in this series. Finding myself there reminded me of my first encounter with the rentier economy nine years ago, when of course I had no idea that’s what I’d run into. I was at a conference of entrepreneurs, writers, consultants, life coaches, and other optimistic types. We started by introducing ourselves from the microphone at the front of the room. Success story followed success story, then one guy blew up the room by telling how, back in the earliest days of the internet, he and Starbucks’ Howard Schultz spent $250K buying up domain names for the biggest corporations and brand names. Last year, he said, he made $76 million from selling or renting them back.

He was a rentier, and I was in the wrong room. When it was my turn at the mic, I opened my mouth and nothing came out. Welcome to the real world, my idealistic friend.

As it turns out, following the rentier pathway eventually leads us all the way through the opinionated commentary and current headlines to a much bigger worldwide issue. We’ll go there next time.

On the Third Hand Cont’d.

Will the machines take over the jobs?

In a recent TED talk, scholar, economist, author, and general wunderkind Daniel Susskind[1] says the question is distracting us from a much bigger and more important issue: how will we feed, clothe, and shelter ourselves if we no longer work for a living?

“If we think of the economy as a pie, technological progress makes the pie bigger. Technological unemployment, if it does happen, in a strange way will be a symptom of that success — we will have solved one problem — how to make the pie bigger — but replaced it with another — how to make sure that everyone gets a slice. As other economists have noted, solving this problem won’t be easy.

“Today, for most people, their job is their seat at the economic dinner table, and in a world with less work or even without work, it won’t be clear how they get their slice. This is the collective challenge that’s right in front of us — to figure out how this material prosperity generated by our economic system can be enjoyed by everyone in a world in which our traditional mechanism for slicing up the pie, the work that people do, withers away and perhaps disappears.”

Guy Standing, another British economist, agrees with Susskind about this larger issue. The following excerpts are from his book The Corruption of Capitalism. He begins by quoting Nobel prizewinning economist Herbert Simon’s 1960 prediction:

“Within the very near future – much less than twenty-five years – we shall have the technical capacity of substituting machines for any and all human functions in organisations.”

And then he makes these comments:

“You do not receive a Nobel Prize for Economics for being right all the time! Simon received his in 1978, when the number of people in jobs was at record levels. It is higher still today. Yet the internet-based technological revolution has reopened age-old visions of machine domination. Some are utopian, such as the post-capitalism of Paul Mason, imagining an era of free information and information sharing. Some are decidedly dystopian, where the robots — or rather their owners — are in control and mass joblessness is coupled with a ‘panopticon’ state[2] subjecting the proles to intrusive surveillance, medicalized therapy and brain control. The pessimists paint a ‘world without work.’ With every technological revolution there is a scare that machines will cause ‘technological unemployment’. This time the Jeremiahs seem a majority.

 “Whether or not they will do so in the future, the technologies have not yet produced mass unemployment… [but they] are contributing to inequality.

“While technology is not necessarily destroying jobs, it is helping to destroy the old income distribution system.

“The threat is technology-induced inequality, not technological unemployment.”

Economic inequality and income distribution (sharing national wealth on a basis other than individual earned income) are two sides of the issue of economic fairness — always an inflammatory topic.

When I began my study of economics 15 months ago, I had never heard of economic inequality, and income distribution was something socialist countries did. Now I find both topics all over worldwide economic news and commentary, yet still mostly absent from U.S. public discourse (such as it is) outside of academic circles. On the whole, most policy-makers on both the left and right maintain their allegiance to the post-WWII Mont Pelerin neoliberal economic model, supported by a cultural and moral bias in favor of working for a living, and if the plutocrats take a bigger slice of the pie while the welfare rug gets pulled out from under the working poor, well then so be it. If the new robotic and super-intelligent digital workers do in fact cause massive technological unemployment among the humans, we’ll all be reexamining these beliefs, big time.

I started this series months ago by asking whether money can buy happiness, citing the U.N.’s World Happiness Report. The 2018 Report was issued this week, and who should be on top but… Finland! And guess what — among other things, factors cited include low economic inequality and strong social support systems (i.e., a cultural value for non-job-based income distribution). National wealth was also a key factor, but it alone didn’t buy happiness: the USA, with one of the world’s highest per capita GDPs, had an overall ranking of 18th. For more, see this World Economic Forum article or this one from the South China Morning Post.

We’ll be looking further into all of this (and much more) in the weeks to come.

[1] If you’ve been following this blog for a while and the name “Susskind” sounds familiar, it may be because a couple of years ago I blogged about the future and culture of the law, often citing the work of Richard Susskind, whose opus is pretty much the mother lode of crisp thinking about the law and technology. His equally brilliant son Daniel joined him in a book that extended the analysis to other professions, which that series also considered. (Those blogs were collected in my book Cyborg Lawyers.) Daniel received a doctorate in economics from Oxford University, was a Kennedy Scholar at Harvard, and is now a Fellow in Economics at Balliol College, Oxford. Previously, he worked as a policy adviser in the Prime Minister’s Strategy Unit and as a senior policy adviser in the Cabinet Office.

[2] The panopticon architectural structure was the brainchild of legal philosopher Jeremy Bentham. For an introduction to the origins of his idea and its application to the digital age, see this article in The Guardian.

On the Third Hand…

Will the machines eventually monopolize the workplace? Ask economists, and you won’t get the rational analysis that traditional economic theory insists upon. Instead, you’ll get opinions that gravitate toward competing ideologies, reflecting individual cognitive, emotional, and political biases.

That’s certainly been the experience of Martin Ford, entrepreneur, TED talker, and New York Times bestselling author of Rise of the Robots: Technology and the Threat of a Jobless Future:

“In the field of economics the opinions all too often break cleanly along predefined political lines. Knowing the ideological predisposition of a particular economist is often a better predictor of what that individual is likely to say than anything contained in the data under examination. In other words, if you’re waiting for the economists to deliver some sort of definitive verdict on the impact that advancing technology is having on the economy, you may have a very long wait.”[1]

In this Psychology Today article, Dr. Karl Albrecht[2] offers a neurological explanation for polarized thinking:

“Recent research suggests that our brains may be pre-wired for dichotomized thinking. That’s a fancy name for thinking and perceiving in terms of two – and only two – opposing possibilities.

“These research findings might help explain how and why the public discourse of our culture has become so polarized and rancorous, and how we might be able to replace it with a more intelligent conversation.

“[O]ur brains can keep tabs on two tasks at a time, by sending each one to a different side of the brain. Apparently, we toggle back and forth, with one task being primary and the other on standby.

“Add a third task, however, and one of the others has to drop off the to-do list.

“Scans of brain activity during this task switching have led to the hypothesis that the brain actually likes handling things in pairs. Indeed, the brain itself is subdivided into two distinct half-brains, or hemispheres.

“Curiously, part of our cranial craving for two-ness might be related to our own physiology: the human body is bilaterally symmetrical. Draw an imaginary center line down through the front of a person and you see a lot of parts (not all, of course), that come in pairs: two eyes, two ears, two nostrils, matching teeth on left and right sides, two shoulders, two arms, two hands, two nipples, two legs, two knees, and two feet. Inside you’ll find two of some things and one of others.

“Some researchers are now extending this reasoning to suggest that the brain has a built-in tendency, when confronted by complex propositions, to selfishly reduce the set of choices to just two. Apparently it doesn’t like to work hard.

“Considering how quickly we make our choices and set our opinions, it’s unlikely that all of the options will even be identified, never mind carefully considered.”

On the one hand this, on the other hand that, we like to say. Lawyers perfect the art. Politics and the press thrive on dichotomy:

“Again, our common language encodes the effect of this anatomical self reference. “On the one hand, there is X. But on the other hand, we have Y.” Many people describe political views as being either “left” or “right.”

“The popular press routinely constructs “news” stories around conflicts and differences between pairs of opposing people, factions, and ideologies. Bipolar conflict is the very essence of most of the news.”

So, are robots and artificial intelligence going to trash the working world, or not?

Hmmm, there might be another option — several, actually. Dr. Albrecht urges us to find them:

“Seek the ‘third hand’ – and any other ‘hands’ you can discover. Ask yourself, and others, ‘Are there other options to be considered?'”

We’ll consider some third hand perspectives about the rise of the robots in the coming weeks.

[1] Martin Ford is also the consulting expert for Societe Generale’s new “Rise of the Robots” investment index, which focuses on companies that are “significant participants in the artificial intelligence and robotics revolution.”

[2] According to his website, Karl Albrecht “is an executive management consultant, futurist, lecturer, and author of more than 20 books on professional achievement, organizational performance, and business strategy. He is also a leading authority on cognitive styles and the development of advanced thinking skills. The Mensa Society honored him with its lifetime achievement award, for significant contributions by a member to the understanding of intelligence. Originally a physicist, and having served as a military intelligence officer and business executive, he now consults, lectures, and writes about whatever he thinks would be fun.”

Race Against the Machine Part 2

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

“Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.”

Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race, and anticipate more of the same in the future, but its reception and the authors’ comments in other contexts are not so measured. As a result, the letter has become a cheering section for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs, they’re worried about the end of the world — or at least the human race. This Washington Post op-ed piece thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad — conventional thinking aside — other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else for that matter.

More on that next time.

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but in the absence of a better alternative, we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended… side effects of well-intentioned social and economic policies”, the authors cite Tim O’Reilly[2] in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with it. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent, longer article, “How Technology is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality.

The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions.

[I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll visit the shadowy side of the street again next time.

[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of  O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition who’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

“If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative… What this will empower is to turn this creativity into action.

“We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.”

Anderson sums it up this way:

“So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.”

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk:

David Lee TED talk

Like Sebastian Thrun, he’s no Pollyanna:  he understands that yes, technology threatens jobs:

“There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that ‘driver’ is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

“Well, a recent study from Forrester Research goes so far to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

“What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.”

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

“The good news is that we have faced down and recovered two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

“The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

“But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

“I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.”

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.