Work and Money

will work for food

He’s a gentleman with a family
A gentle man, living day to day
He’s a gentleman with pride, one may conclude
Sign reads, “Gentleman with a family will work for food.”

Manhattan Transfer, Gentleman With a Family

Norwegian Petter Amlie is an entrepreneur, technology consultant, and frequent contributor on Medium. Work runs our economy, he writes in a recent article, “but if future technology lets us keep our standard of living without it, why do we hold on to it?” It’s a good question — one of those obvious ones we don’t think to ask. Why would we insist on working for food — or the money we need to buy food — if we don’t have to?

As we’ve seen, the objections to robotics, artificial intelligence, big data, marketing algorithms, machine learning, and universal basic income center on the threat they pose to the link between work and money. That’s upsetting because we believe jobs are the only way to “make a living.” But what if a day comes — sooner than we’d like to think — when that’s no longer true?

Work comes naturally to us, but the link between work and money is artificial — the function of an economic/social contract that relies on jobs to support both the production and consumption sides of the supply/demand curve:  we work to produce goods and services, we get paid for doing it, we use the money to buy goods and services from each other. If technology takes over the production jobs, we won’t get paid to produce things — then how are we supposed to buy them? Faced with that question, “the captains of industry and their fools on the hill” (Don Henley) generally talk jobs, jobs, jobs — or, in the absence of jobs, workfare.

John Maynard Keynes had a different idea back in 1930, just after the 1929 stock market crash, when he predicted that technological progress would end the need for full-time jobs, so that we would work for pay maybe fifteen hours per week, leaving us free for nobler pursuits. He spoke in rapturous, Biblical terms:

“I see us free, therefore, to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanor, and the love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow. We shall once more value ends above means and prefer the good to the useful. We shall honour those who can teach us how to pluck the hour and the day virtuously and well, the delightful people who are capable of taking direct enjoyment in things, the lilies of the field who toil not, neither do they spin.”

But then, after a second world war tore the planet apart, jobs rebuilt it. We’ve lived with that reality so long that we readily pooh-pooh Keynes’s euphoric prophecy. Amlie suggests we open our minds to it:

“Work and money are both systems we’ve invented that were right for their time, but there’s no reason to see them as universally unavoidable parts of society. They helped us build a strong global economy, but why would we battle to keep it that way, if societal and technological progress could help us change it?

“We have a built-in defense mechanism when the status quo is challenged by ideas such as Universal Basic Income, shorter work weeks and even just basic flexibility at the workplace, often without considering why we have an urge to defend it.

“You’re supposed to be here at eight, even if you’re tired. You’re supposed to sit here in an open landscape, even if the isolation of a home office can help you concentrate on challenging tasks. You have exactly X number of weeks to recharge your batteries every year, because that’s how it’s always been done.

“While many organizations have made significant policy adjustments in the last two decades, we’re still clinging to the idea that we should form companies, they should have employees that are paid a monthly sum to be there at the same time every morning five days a week, even if this system is not making us very happy.

“I do know that work is not something I necessarily want to hold on to, if I could sustain my standard of living without it, which may just be the case if robots of the future could supply us with all the productivity we could ever need. If every job we can conceive could be done better by a machine than a human, and the machines demand no pay, vacation or motivation to produce goods and services for mankind for all eternity, is it such a ridiculous thought to ask in such a society why we would need money?

“We should be exploring eagerly how to meet these challenges and how they can improve the human existence, rather than fighting tooth and nail to sustain it without knowing why we want it that way.

“The change is coming. Why not see it in a positive light, and work towards a future where waking up at 4 am to go to an office is not considered the peak of human achievement?”

One gentleman with a family who’s been seeing change in a positive new light is Juha Järvinen, one of 2,000 Finns selected for a two-year UBI test that just ended. He’s no longer working hard for the money, but he is working harder than ever.  We’ll meet him next time.

Fireflies and Algorithms


We’ve been looking at workfare — the legislated link between jobs and the social safety net. An article published last week, “Fireflies And Algorithms — The Coming Explosion Of Companies,”[1] brought the specter of workfare to the legal profession.

Reading it, my life flashed before my eyes, beginning with one particular memory: me, a newly hired associate, resplendent in my three-piece gray pinstripe suit, joining the 4:30 queue at the Secretary of State’s office, clutching hot-off-the-word-processor Articles of Incorporation and a firm check for the filing fee, fretting over whether I’d get my copy time-stamped by closing time. Everything always had to be filed that very day, for reasons I don’t remember.

Entity choice and creation spanned transactional practice: corporate, securities, mergers and acquisitions, franchising, tax, intellectual property, real property, commercial leasing….  The practice enjoyed its glory days when LLCs were invented, and when a raft of new entity hybrids followed… well, that was an embarrassment of riches.

It was a big deal to set up a new entity and get it just right — make sure the correct ABC acquired the correct XYZ, draw the whole thing up in x’s and o’s, and finance it with somebody else’s money. To do all that required strategic alliances with brokers, planners, agents, promoters, accountants, investment bankers, financiers…. Important people initiated the process, and there was a sense of substantiality and permanence about it, with overtones of mahogany and leather, brandy and cigars. These were entities that would create and engage whole communities of real people doing real jobs to deliver real goods and services to real consumers. Dissolving an entity was an equally big deal, requiring somber evaluation and critical reluctance, not to mention more time-stamped paperwork.

Fireflies And Algorithms sweeps it all away — whoosh! just like that! — and describes its replacement: an inhuman world of here-and-gone entities created and dissolved without the intent of all those important people or all that help from all those people in the law and allied businesses. (How many jobs are we talking about, I wonder — tens, maybe hundreds of thousands?) The new entities will do to choice-of-entity practice what automated trading did to the stock market, as described in this UCLA Law Review article:

“Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human. Modern finance is transforming into what this Article calls cyborg finance.”

In that “cyborg finance” world,

“[The “enhanced velocity” of automated, algorithmic trading] has shortened the timeline of finance from days to hours, to minutes, to seconds, to nanoseconds. The accelerated velocity means not only faster trade executions but also faster investment turnovers. “At the end of World War II, the average holding period for a stock was four years. By 2000, it was eight months. By 2008, it was two months. And by 2011 it was twenty-two seconds….

Fireflies And Algorithms says the business entity world is in for the same dynamic, and therefore we can expect:

“… what we’re calling ‘firefly companies’ — the blink-and-you-miss-it scenario brought about by ultra-short-life companies, combined with registers that remove records once a company has been dissolved, meaning that effectively they are invisible.”

Firefly companies are formed by algorithms, not by human initiative. Each is created for a single transaction — one contract, one sale, one span of ownership. They’re peer-reviewed, digitally secure, self-executing, self-policing, and trans-jurisdictional — all free or at minimal cost. And all of it is memorialized not in Secretary of State or SEC filings but on a blockchain.

“So what does all this mean?” the article asks:

“How do we make sense of a world where companies — which are, remember, artificial legal constructs created out of thin air to have legal personality — can come into existence for brief periods of time, like fireflies in the night, perform or collaborate on an act, and then disappear? Where there are perhaps not 300 million companies, but 1 billion, or 10 billion?”

Think about it. And then — if it hasn’t happened yet — watch your life flash before your eyes.

Or if not your life, at least your job. Consider, for example, a widely cited 2013 study that predicted 47% of U.S. jobs could be lost to automation. Even if that prediction is only half true, that’s still a lot of jobs. And consider a recent LawGeex contest, in which artificial intelligence absolutely smoked an elite group of transactional lawyers:

“In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

“The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes.”

These developments significantly expand the pool of people potentially needing help through bad times. Currently, that means workfare. But how can you have workfare if technology is wiping out jobs?

More on that next time.

[1] The article was published by OpenCorporates, which according to its website is “the world’s largest open database of the corporate world and winner of the Open Data Business Award.”

Race Against the Machine

For the past several years, two MIT big thinkers[1] have been the go-to authorities in the scramble to explain how robotics, artificial intelligence, and big data are revolutionizing the economy and the working world. Their two books were published four and six years ago — so yesterday in the world of technology — but they were remarkably prescient when written, and have not diminished in relevance. They are:

Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2012)

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014)

Click here for a chapter-by-chapter digest of The Second Machine Age, written by an all-star cast of economic commentators. Among other things, they acknowledge the authors’ view that neoliberal capitalism has not fared well in its dealings with the technological juggernaut, but that in the absence of a better alternative we might as well continue to ride the horse in the direction it’s going.

While admitting that History (not human choice) is “littered with unintended… side effects of well-intentioned social and economic policies,” the authors cite Tim O’Reilly[2] in favor of pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we let the technologies do their work and find ways to deal with the consequences. They are “skeptical of efforts to come up with fundamental alternatives to capitalism.”

David Rotman, editor of the MIT Technology Review, cites The Second Machine Age extensively in an excellent longer article, “How Technology Is Destroying Jobs.” Although the article is packed with contrary analysis and opinion, the following excerpts emphasize what many might consider the shadowy side of the street (compared to the sunny side we looked at in the past couple of posts). I added the headings below to emphasize that many of the general economic themes we’ve been talking about also apply to the specific dynamics of the job market.

It used to be that economic growth — including wealth creation — also created more jobs. It doesn’t work that way any more. Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States.

For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

A rising economic tide no longer floats all boats. The result is a skewed allocation of the rewards of growth away from jobs — i.e., economic inequality. The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades.

“[S]teadily rising productivity raised all boats for much of the 20th century,” [Brynjolfsson] says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States.

Meanwhile, technology is taking over the jobs that are left — blue collar, white collar, and even the professions. [I]mpressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared.

New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected.

We’ll visit the shadowy side of the street again next time.

[1] Erik Brynjolfsson is director of the MIT Center for Digital Business, and Andrew McAfee is a principal research scientist at MIT who studies how digital technologies are changing business, the economy, and society.

[2] According to his official bio on his website, Tim O’Reilly “is the founder and CEO of  O’Reilly Media, Inc. His original business plan was simply ‘interesting work for interesting people,’ and that’s worked out pretty well. O’Reilly Media delivers online learning, publishes books, runs conferences, urges companies to create more value than they capture, and tries to change the world by spreading and amplifying the knowledge of innovators.”

Gonna Be a Bright, Bright, Sunshiny Day

We met Sebastian Thrun last time. He’s a bright guy with a sunshiny disposition who’s not worried about robots and artificial intelligence taking over all the good jobs, even his own. Instead, he’s perfectly okay if technology eliminates most of what he does every day because he believes human ingenuity will fill the vacuum with something better. This is from his conversation with TED curator Chris Anderson:

“If I look at my own job as a CEO, I would say 90 percent of my work is repetitive, I don’t enjoy it, I spend about four hours per day on stupid, repetitive email. And I’m burning to have something that helps me get rid of this. Why? Because I believe all of us are insanely creative… What this will empower is to turn this creativity into action.

“We’ve unleashed this amazing creativity by de-slaving us from farming and later, of course, from factory work and have invented so many things. It’s going to be even better, in my opinion. And there’s going to be great side effects. One of the side effects will be that things like food and medical supply and education and shelter and transportation will all become much more affordable to all of us, not just the rich people.”

Anderson sums it up this way:

“So the jobs that are getting lost, in a way, even though it’s going to be painful, humans are capable of more than those jobs. This is the dream. The dream is that humans can rise to just a new level of empowerment and discovery. That’s the dream.”

Another bright guy with a sunshiny disposition is David Lee, Vice President of Innovation and the Strategic Enterprise Fund for UPS. He, too, shares the dream that technology will turn human creativity loose on a whole new kind of working world. Here’s his TED talk (click the image):

David Lee TED talk

Like Sebastian Thrun, he’s no Pollyanna: he understands that, yes, technology threatens jobs:

“There’s a lot of valid concern these days that our technology is getting so smart that we’ve put ourselves on the path to a jobless future. And I think the example of a self-driving car is actually the easiest one to see. So these are going to be fantastic for all kinds of different reasons. But did you know that ‘driver’ is actually the most common job in 29 of the 50 US states? What’s going to happen to these jobs when we’re no longer driving our cars or cooking our food or even diagnosing our own diseases?

“Well, a recent study from Forrester Research goes so far as to predict that 25 million jobs might disappear over the next 10 years. To put that in perspective, that’s three times as many jobs lost in the aftermath of the financial crisis. And it’s not just blue-collar jobs that are at risk. On Wall Street and across Silicon Valley, we are seeing tremendous gains in the quality of analysis and decision-making because of machine learning. So even the smartest, highest-paid people will be affected by this change.

“What’s clear is that no matter what your job is, at least some, if not all of your work, is going to be done by a robot or software in the next few years.”

But that’s not the end of the story. Like Thrun, he believes that the rise of the robots will clear the way for unprecedented levels of human creativity — provided we move fast:

“The good news is that we have faced down and recovered from two mass extinctions of jobs before. From 1870 to 1970, the percent of American workers based on farms fell by 90 percent, and then again from 1950 to 2010, the percent of Americans working in factories fell by 75 percent. The challenge we face this time, however, is one of time. We had a hundred years to move from farms to factories, and then 60 years to fully build out a service economy.

“The rate of change today suggests that we may only have 10 or 15 years to adjust, and if we don’t react fast enough, that means by the time today’s elementary-school students are college-aged, we could be living in a world that’s robotic, largely unemployed and stuck in kind of un-great depression.

“But I don’t think it has to be this way. You see, I work in innovation, and part of my job is to shape how large companies apply new technologies. Certainly some of these technologies are even specifically designed to replace human workers. But I believe that if we start taking steps right now to change the nature of work, we can not only create environments where people love coming to work but also generate the innovation that we need to replace the millions of jobs that will be lost to technology.

“I believe that the key to preventing our jobless future is to rediscover what makes us human, and to create a new generation of human-centered jobs that allow us to unlock the hidden talents and passions that we carry with us every day.”

More from David Lee next time.

If all this bright sunshiny perspective made you think of that old tune, you might treat yourself to a listen. It’s short, you’ve got time.

And for a look at a current legal challenge to the “gig economy” across the pond, check out this Economist article from earlier this week.

Learning to Learn

“I didn’t know robots had advanced so far,” a reader remarked after last week’s post about how computers are displacing knowledge workers. What changed to make that happen? The machines learned how to learn. This is from Artificial Intelligence Goes Bilingual—Without A Dictionary, Science Magazine, Nov. 28, 2017.

“Imagine that you give one person lots of Chinese books and lots of Arabic books—none of them overlapping—and the person has to learn to translate Chinese to Arabic. That seems impossible, right?” says… Mikel Artetxe, a computer scientist at the University of the Basque Country (UPV) in San Sebastián, Spain. “But we show that a computer can do that.”

Most machine learning—in which neural networks and other computer algorithms learn from experience—is “supervised.” A computer makes a guess, receives the right answer, and adjusts its process accordingly. That works well when teaching a computer to translate between, say, English and French, because many documents exist in both languages. It doesn’t work so well for rare languages, or for popular ones without many parallel texts.

[This learning technique is called] unsupervised machine learning. [A computer using this technique] constructs bilingual dictionaries without the aid of a human teacher telling them when their guesses are right.
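The supervised loop the article describes, in which the computer makes a guess, receives the right answer, and adjusts, can be sketched in a few lines of Python. This is a minimal illustration under toy assumptions, not the translation system itself: a single weight learning the hidden rule y = 2x from labeled examples.

```python
# Minimal supervised learning: guess, compare with the right answer, adjust.
# The "model" is one weight w; the labeled pairs encode the rule y = 2x.
def train(pairs, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            guess = w * x          # the computer makes a guess
            error = guess - y      # the labeled example supplies the right answer
            w -= lr * error * x    # adjust the process to shrink the error
    return w

w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 4))  # converges to 2.0, the rule hidden in the labels
```

Unsupervised translation, by contrast, has no (x, y) pairs at all: the system must infer the mapping between languages from the internal structure of each corpus on its own.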

Hmmm… I could have used that last year, when my wife and I spent three months visiting our daughter in South Korea. The Korean language is ridiculously complex; I never got much past “good morning.”

AlphaGo match

Go matches were a standard offering on the gym TVs where I worked out in Seoul. (Imagine two guys in black suits staring intently at a game board — not exactly a riveting workout visual.) Like the Korean language, Go is also ridiculously complex, and mysterious, too: the masters seem to make moves more intuitively than analytically. But the days of human Go supremacy are over. Google wizard and overall overachiever Sebastian Thrun[1] explains why in this conversation with TED Curator Chris Anderson:

Sebastian Thrun TED talk

“Artificial intelligence and machine learning is about 60 years old and has not had a great day in its past until recently. And the reason is that today, we have reached a scale of computing and datasets that was necessary to make machines smart. The new thing now is that computers can find their own rules. So instead of an expert deciphering, step by step, a rule for every contingency, what you do now is you give the computer examples and have it infer its own rules.

“A really good example is AlphaGo. Normally, in game playing, you would really write down all the rules, but in AlphaGo’s case, the system looked over a million games and was able to infer its own rules and then beat the world’s reigning Go champion. That is exciting, because it relieves the software engineer of the need of being super smart, and pushes the burden towards the data.

“20 years ago the computers were as big as a cockroach brain. Now they are powerful enough to really emulate specialized human thinking. And then the computers take advantage of the fact that they can look at much more data than people can. AlphaGo looked at more than a million games.  No human expert can ever study a million games. So as a result, the computer can find rules that even people can’t find.”

Thrun made those comments in April 2017. AlphaGo’s championship reign was short-lived:  six months later it lost big to a new cyber challenger that taught itself without reviewing all that data. This is from AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help, MIT Technology Review, October 18, 2017.

AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.

Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.

The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.

“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind [the creators of AlphaGo Zero].

“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.”

Did you catch that? “We’ve removed the constraints of human knowledge.” Wow. No wonder computers are elbowing all those knowledge workers out of the way.
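AlphaGo Zero’s recipe, starting from nothing but the rules and improving by playing against itself, can be sketched on a toy game. What follows is an illustrative sketch under simplified assumptions, not DeepMind’s algorithm: a tabular learner that teaches itself the pile game Nim (take 1 to 3 stones; whoever takes the last stone wins) purely from self-play and the final win/loss signal.

```python
import random

def train_selfplay(pile=10, episodes=20000, eps=0.1, seed=0):
    """Self-play learning for Nim: no human games, no strategy hints,
    just the rules and a +1/-1 signal when each game ends."""
    rng = random.Random(seed)
    Q = {}  # (stones_remaining, action) -> average return for the mover
    N = {}  # visit counts, for incremental averaging
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            actions = [a for a in (1, 2, 3) if a <= n]
            if rng.random() < eps:                     # occasionally explore
                a = rng.choice(actions)
            else:                                      # otherwise exploit
                a = max(actions, key=lambda x: Q.get((n, x), 0.0))
            history.append((n, a))
            n -= a
        ret = 1.0  # whoever took the last stone has just won
        for state, action in reversed(history):
            N[state, action] = N.get((state, action), 0) + 1
            old = Q.get((state, action), 0.0)
            Q[state, action] = old + (ret - old) / N[state, action]
            ret = -ret  # zero-sum game: flip perspective each move back
    return Q

def best_move(Q, n):
    return max([a for a in (1, 2, 3) if a <= n],
               key=lambda a: Q.get((n, a), 0.0))
```

After training, the table rediscovers the classic Nim strategy of leaving the opponent a multiple of four stones (from 5 take 1, from 6 take 2, from 7 take 3), a rule nobody wrote down for it. That is the point of the passage above: the knowledge comes from self-play, not from human examples.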

What’s left for humans to do? We’ll hear from Sebastian Thrun and others on that topic next time.

[1] Sebastian Thrun’s TED bio describes him as “an educator, entrepreneur and troublemaker. After a long life as a professor at Stanford University, Thrun resigned from tenure to join Google. At Google, he founded Google X, home to self-driving cars and many other moonshot technologies. Thrun also founded Udacity, an online university with worldwide reach, and Kitty Hawk, a ‘flying car’ company. He has authored 11 books, 400 papers, holds 3 doctorates and has won numerous awards.”

The Future of Law (23): The Future Couldn’t Wait III

I tried to end this series three weeks ago, but the future keeps arriving, and I keep wanting to tell you about it. I realize that just because it’s news to me doesn’t mean it’s news, and this week’s topic is a case in point:  it was analyzed in this law journal article three years ago.

“This article is dedicated to highlighting the coming age of Quantitative Legal Prediction with hopes that practicing lawyers, law students and law schools will take heed and prepare to survive (thrive) in this new ordering. Simply put, most lawyers, law schools and law students are going to have to do more to prepare for the data driven future of this industry. In other words, welcome to Law’s Information Revolution and yeah – there is going to be math on the exam.”

“Quantitative Legal Prediction” is noteworthy because it encompasses several of the developments we’ve been talking about. They all come together in Ravel Law, as described a couple of weeks ago in The Lawyerist:

“We hear a lot of talk about “big data” and how it will drive law practice in the future. In theory, someday you will have every bit of relevant practice data at your fingertips and you will be able to use that to predict how a judge will rule on a case, have computers crunch through discovery, and realistically predict the cost of litigation. That someday is getting closer and closer, particularly with tools like Ravel.

“At its most advanced, Ravel also offers judge analytics, where you can see patterns about how judges rule and what ideas and people influence those judges. That type of analysis could be incredibly helpful in making decisions about settlement, deciding who should argue a case, whether to strike a judge, and how to approach your pretrial motion practice.”

The National Law Review said this about Ravel Law last winter:

“Data analytics and technology has been used in many different fields to predict successful results.

“Having conducted metrics-based research and advocacy while at the Bipartisan Policy Center, and observing how data-driven decision making was being used in areas like baseball and politics, [Ravel Law founder Daniel Lewis] was curious why the legal industry had fallen so far behind. Even though the legal field is often considered to be slow moving, there are currently over 11 million opinions in the U.S. judicial system with more than 350,000 new opinions issued per year. There is also a glut of secondary material that has appeared on the scene in the form of legal news sources, white papers, law blogs and more. Inspired by technology’s ability to harness and utilize vast amounts of information, Daniel founded Ravel Law to accommodate the dramatically growing world of legal information.

“Ravel’s team of PhDs and technical advisors from Google, LinkedIn, and Facebook, has coded advanced search algorithms to determine what is relevant, thereby enhancing legal research’s effectiveness and efficiency.

“Ravel provides insights, rather than simply lists of related materials, by using big data technologies such as machine learning, data visualization, advanced statistics and natural language processing.”

Not surprisingly, Ravel Law has worked closely with law students to develop and market itself:

“We work with schools because students are always the latest generation and have the highest expectations about how technology should work for them.” Students have given the Ravel team excellent feedback and have grown into a loyal user base over the past few years. Once these students graduate, they introduce Ravel to their firms.

Ravel Law offers data visualization/mapping. For an article on why you should care, see this Above the Law article from a couple days ago.