There’s No Such Thing as a Free Lunch — True or False?

[Images: “free lunch” quotes attributed to Milton Friedman and Stephen Hawking]

We can assume that the pros and cons of a universal basic income (UBI) have been thoroughly researched and reasonably analyzed, and that each side holds its position with utmost conviction.

We can also assume that none of that reasonableness and conviction will convert anyone from one side to the other, or win over the uncommitted. Reason doesn’t move us:  we use it to justify what we already decided, based on what we believe. See “Why Facts Don’t Change Our Minds,” The New Yorker (February 2017), and “This Article Won’t Change Your Mind,” The Atlantic (March 2017).

History doesn’t guide us either — see “Why We Refuse to Learn From History” from Big Think, and Why Don’t We Learn From History? from military historian Sir Basil Henry Liddell Hart. The latter is full of conventional wisdom:

“The most instructive, indeed the only method of learning to bear with dignity the vicissitude of fortune, is to recall the catastrophes of others.

“History is the best help, being a record of how things usually go wrong.

“There are two roads to the reformation for mankind— one through misfortunes of their own, the other through the misfortunes of others; the former is the most unmistakable, the latter the less painful.

“I would add that the only hope for humanity, now, is that my particular field of study, warfare, will become purely a subject of antiquarian interest. For with the advent of atomic weapons we have come either to the last page of war, at any rate on the major international scale we have known in the past, or to the last page of history.”

Good advice, maybe, but we’ve heard it before, and besides, most of us would rather make our own mistakes.

If reasoned analysis and historical perspective don’t inform our responses to radically new ideas like UBI, then what does? Many things, but cultural belief is high on the list. Policy is rooted in culture, culture is rooted in shared beliefs, and beliefs are rooted in history. Cultural beliefs shape individual bias, and the whole belief system becomes sacred in the culture’s mythology. Try to subvert cultural beliefs, and the response is outrage and entrenchment.

All of which means that each of us probably had a quick true or false answer to the question in this week’s blog post title, and was ready to defend it with something that sounded reasonable. Our answer likely signals our knee-jerk response to the idea of UBI. The “free lunch” — or, more accurately, “free money” — issue appears to be the UBI Great Divide:  get to that point, and you’re either pro or con, and there’s no neutral option. (See this for more about where the “no free lunch” phrase came from.[1])

The Great Divide is what tanked President Nixon’s UBI legislation. The plan, which would have paid a family of four $1,600/year (equivalent to $10,428 today), was set to launch in the midst of an outpouring of political self-congratulation and media endorsement, only to be scuttled by a memo from a White House staffer that described the failure of a British UBI experiment 150 years earlier. UBI apparently was in fact a free lunch, with no redeeming social purpose; thus its fate was sealed.

As it turns out, whether the experiment failed or not was lost in a 19th-century fog of cultural belief which enabled opponents of the experiment to pounce on a bogus report about its impact to justify passing the Poor Law Amendment Act of 1834 — which is what they wanted to do anyway. The new Poor Law was that era’s version of workfare, and was generated by the worst kind of scarcity mentality applied to the worst kind of scarcity. Besides creating the backdrop to Charles Dickens’ writing, the new Poor Law’s philosophical roots still support today’s welfare system:

“The new Poor Law introduced perhaps the most heinous form of ‘public assistance’ that the world has ever witnessed. Believing the workhouses to be the only effective remedy against sloth and depravity, the Royal Commission forced the poor into senseless slave labor, from breaking stones to walking on treadmills….”

From “The Bizarre Tale Of President Nixon’s Basic Income Plan.”

If UBI is a free lunch, then it’s an affront to a culture that values self-sufficiency. If it isn’t, then it requires a vastly different cultural value system to support it. The former believes that doing something — “making a living” at a job — is how you earn your daily bread. The latter believes you’re entitled to sustenance if you are something:  i.e., a citizen or member of the nation, state, city, or other institution or community providing the UBI. The former is about activity, the latter is about identity. This Wired article captures the distinction:

“The idea [of UBI] is not exactly new—Thomas Paine proposed a form of basic income back in 1797—but in this country, aside from Social Security and Medicare, most government payouts are based on individual need rather than simply citizenship.”

UBI is about “simply citizenship.” It requires a cultural belief that everybody in the group shares its prosperity.  Cultural identity alone ensures basic sustenance — it’s a right, and that right makes Poor Laws and workfare obsolete.

The notion of cultural identity invites comparison between UBI and the “casino money” some Native American tribes pay their members. How’s that working? We’ll look at that next time.

[1] Yes, Milton Friedman did in fact say it, although he wasn’t the only one. And in a surprising twist, he has been criticized for advocating his own version of UBI.

Old Dog, Old Trick, New Showtime


Blockchain consultant and futurist Michael Spencer called it a conspiracy by the 0.01 percenters to enslave the rest of us for good.[1] A growing number of those 0.01 percenters have already supported it, but they’re not alone:  this poll conducted shortly after the 2016 election showed that half of Americans supported it as well. A parade of think tanks (here’s one) and other professional skeptics (more than I can cite with hyperlinks in a single sentence) have given it a thorough vetting and mostly concluded something along the lines of “yeah well okay maybe it’s worth a try.”

What is “it”? This idea:  give the poor what they lack — money. Ensure everyone a livable income while getting rid of the expensive and draconian welfare system. And just to be fair, go ahead and give everyone else money, too, even the billionaires.

The idea mostly goes by the name “universal basic income” (UBI). It’s rooted in the futuristic fear that technology will eventually put humans out of work. That’s not a new fear:  UBI is “far from a new idea,” says Martin Ford, a Silicon Valley entrepreneur and popular TED talker, in his New York Times bestseller Rise of the Robots: Technology and the Threat of a Jobless Future.

“In the context of the contemporary American political landscape… a guaranteed income is likely to be disparaged as ‘socialism’ and a massive expansion of the welfare state. The idea’s historical origins, however, suggest something quite different. While a basic income has been embraced by economists and intellectuals on both sides of the political spectrum, the idea has been advocated especially forcefully by conservatives and libertarians.

“Friedrich Hayek, who has become an iconic figure among today’s conservatives, was a strong proponent of the idea. In his three-volume work, Law, Legislation and Liberty, published between 1973 and 1979, Hayek suggested that a guaranteed income would be a legitimate government policy designed to provide against adversity, and that the need for this type of safety net is the direct result of the transition to a more open and mobile society where many individuals can no longer rely on traditional support systems:

‘There is, however, yet another class of common risks with regard to which the need for government action has until recently not been generally admitted…. The problem here is chiefly the fate of those who for various reasons cannot make their living in the market… that is, all people suffering from adverse conditions which may affect anyone and against which most individuals cannot alone make adequate protection but in which a society that has reached a certain level of wealth can afford to provide for all.’”

The possibility of massive technological unemployment was already a live worry back in the ’60s, when a self-organized “Ad Hoc Committee on the Triple Revolution” convened to study the topic. The Committee’s members included Swedish economist and sociologist Gunnar Myrdal.[2] Rise of the Robots describes the Committee’s findings:

“‘Cybernation’ (or automation) would soon result in an economy where ‘potentially unlimited output can be achieved by systems of machines which will require little cooperation from human beings.’ The result would be massive unemployment, soaring inequality, and, ultimately, falling demand for goods and services as consumers increasingly lacked the purchasing power necessary to continue driving economic growth.

“The Ad Hoc Committee went on to propose a radical solution:  the eventual implementation of a guaranteed minimum income made possible by the ‘economy of abundance’ such widespread automation would create, and which would ‘take the place of the patchwork of welfare measures’ that were then in place to address poverty.

“The Triple Revolution report was released to the media and sent to President Johnson, the secretary of labor, and congressional leaders in March 1964. An accompanying cover letter warned ominously that if something akin to the report’s proposed solutions was not implemented, ‘the nation will be thrown into unprecedented economic and social disorder.’ A front-page story with extensive quotations from the report appeared in the next day’s New York Times, and numerous other newspapers and magazines ran stories and editorials (most of which were critical), in some cases even printing the entire text of the report.

“The Triple Revolution marked what was perhaps the crest of a wave of worry about the impact of automation that had arisen following World War II. The specter of mass joblessness as machines displaced workers had incited fear many times in the past — going all the way back to Britain’s Luddite uprising in 1812 — but in the 1950s and ’60s, the concern was especially acute and was articulated by some of the United States’ most prominent and intellectually capable individuals.

“Four months after the Johnson administration received the Triple Revolution report, the president signed a bill creating the National Commission on Technology, Automation, and Economic Progress. In his remarks at the bill’s signing ceremony, Johnson said that ‘automation can be the ally of our prosperity if we will just look ahead, if we will understand what is to come, and if we will set our course wisely after proper planning for the future.’ The newly formed Commission then … quickly faded into obscurity.”

A few years later, Richard Nixon introduced UBI legislation that he called “the most significant piece of social legislation in our nation’s history.” That legislation also faded into obscurity — more on that another time.

UBI is an old idea responding to an old fear:  how do we make a living if we can’t work for it? A half century after LBJ and Nixon, that fear is all too real, and lots of people think it might be time for the historical UBI solution to make its appearance.

But not everyone is jumping on the UBI bandwagon. The very thought that jobs might not be the source of our sustenance is the rallying cry of UBI’s most strident opponents.

More on UBI next time.

[1] Spencer followed with a similarly scathing assessment in this article.

[2] Myrdal’s study of race relations was influential in Brown v. Board of Education. He was also an architect of the Swedish social democratic welfare state. Hayek and Myrdal were jointly awarded the Nobel Prize in Economics in 1974.

Fireflies and Algorithms


We’ve been looking at workfare — the legislated link between jobs and the social safety net. An article published last week, “Fireflies And Algorithms — The Coming Explosion Of Companies,”[1] brought the specter of workfare to the legal profession.

Reading it, my life flashed before my eyes, beginning with one particular memory:  me, a newly-hired associate, resplendent in my three-piece gray pinstripe suit, joining the 4:30 queue at the Secretary of State’s office, clutching hot-off-the-word-processor Articles of Incorporation and a firm check for the filing fee, fretting whether I’d get my copy time-stamped by closing time. We always had to file today, for reasons I don’t remember.

Entity choice and creation spanned transactional practice:  corporate, securities, mergers and acquisitions, franchising, tax, intellectual property, real property, commercial leasing….  The practice enjoyed its glory days when LLCs were invented, and when a raft of new entity hybrids followed… well, that was an embarrassment of riches.

It was a big deal to set up a new entity and get it just right — make sure the correct ABC acquired the correct XYZ, draw the whole thing up in x’s and o’s, and finance it with somebody else’s money. To do all that required strategic alliances with brokers, planners, agents, promoters, accountants, investment bankers, financiers…. Important people initiated the process, and there was a sense of substantiality and permanence about it, with overtones of mahogany and leather, brandy and cigars. These were entities that would create and engage whole communities of real people doing real jobs to deliver real goods and services to real consumers. Dissolving an entity was an equally big deal, requiring somber evaluation and critical reluctance, not to mention more time-stamped paperwork.

Fireflies And Algorithms sweeps it all away — whoosh! just like that! — and describes its replacement:  an inhuman world of here-and-gone entities created and dissolved without the intent of all those important people or all that help from all those people in the law and allied businesses. (How many jobs are we talking about, I wonder — tens, maybe hundreds of thousands?) The new entities will do to choice of entity practice what automated trading did to the stock market, as described in this UCLA Law Review article:

“Modern finance is becoming an industry in which the main players are no longer entirely human. Instead, the key players are now cyborgs: part machine, part human. Modern finance is transforming into what this Article calls cyborg finance.”

In that “cyborg finance” world,

“[The ‘enhanced velocity’ of automated, algorithmic trading] has shortened the timeline of finance from days to hours, to minutes, to seconds, to nanoseconds. The accelerated velocity means not only faster trade executions but also faster investment turnovers. At the end of World War II, the average holding period for a stock was four years. By 2000, it was eight months. By 2008, it was two months. And by 2011 it was twenty-two seconds….”

Fireflies And Algorithms says the business entity world is in for the same dynamic, and therefore we can expect:

“… what we’re calling ‘firefly companies’ — the blink-and-you-miss-it scenario brought about by ultra-short-life companies, combined with registers that remove records once a company has been dissolved, meaning that effectively they are invisible.”

Firefly companies are formed by algorithms, not by human initiative. Each is created for a single transaction — one contract, one sale, one span of ownership. They’re peer-reviewed, digitally secure, self-executing, self-policing, and trans-jurisdictional — all for free or minimal cost. And all of that is memorialized not in SOS or SEC filings but in blockchain.
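To make that lifecycle concrete, here is a minimal sketch of what a single-transaction “firefly” entity might look like as ledger entries. Everything here is hypothetical: the names, fields, and the toy hash-linked ledger are illustrative stand-ins, not drawn from OpenCorporates or any real blockchain system. The idea is simply that incorporation, one transaction, and dissolution are a few machine-generated records, with no filing queue and no human in the loop.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field


def digest(text: str) -> str:
    """SHA-256 hex digest of a string."""
    return hashlib.sha256(text.encode()).hexdigest()


@dataclass
class Ledger:
    """A toy hash-linked ledger standing in for a real blockchain."""
    blocks: list = field(default_factory=list)

    def append(self, payload: dict) -> str:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"payload": payload, "prev": prev_hash, "timestamp": time.time()}
        block["hash"] = digest(json.dumps(payload, sort_keys=True) + prev_hash)
        self.blocks.append(block)
        return block["hash"]


def firefly_transaction(ledger: Ledger, buyer: str, seller: str, asset: str, price: float) -> str:
    """Incorporate a single-purpose entity, run its one transaction, dissolve it."""
    entity_id = digest(f"{buyer}:{seller}:{asset}:{time.time()}")[:12]
    ledger.append({"event": "incorporate", "entity": entity_id, "purpose": f"hold {asset}"})
    ledger.append({"event": "transfer", "entity": entity_id, "from": seller,
                   "to": buyer, "asset": asset, "price": price})
    ledger.append({"event": "dissolve", "entity": entity_id})
    return entity_id


ledger = Ledger()
firefly_transaction(ledger, buyer="alice-llc", seller="bob-holdings",
                    asset="warehouse-lease", price=10_000.00)
print(f"{len(ledger.blocks)} ledger entries for one firefly company")
```

The cryptography is beside the point; what matters is how little ceremony there is. The entity exists only as long as its one transaction needs it to, and then it blinks out.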

“So what does all this mean?” the article asks:

“How do we make sense of a world where companies — which are, remember, artificial legal constructs created out of thin air to have legal personality — can come into existence for brief periods of time, like fireflies in the night, perform or collaborate on an act, and then disappear? Where there are perhaps not 300 million companies, but 1 billion, or 10 billion?”

Think about it. And then — if it hasn’t happened yet — watch your life flash before your eyes.

Or if not your life, at least your job. Consider, for example, a widely cited 2013 study that predicted 57% of U.S. jobs could be lost to automation. Even if that prediction is only half true, that’s still a lot of jobs. And consider a recent LawGeex contest, in which artificial intelligence absolutely smoked an elite group of transactional lawyers:

“In a landmark study, 20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

“The study, carried out with leading legal academics and experts, saw the LawGeex AI achieve an average 94% accuracy rate, higher than the lawyers who achieved an average rate of 85%. It took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI. The longest time taken by a lawyer to complete the test was 156 minutes, and the shortest time was 51 minutes.”

These developments significantly expand the pool of people potentially needing help through bad times. Currently, that means workfare. But how can you have workfare if technology is wiping out jobs?

More on that next time.

[1] The article was published by OpenCorporates, which according to its website is “the world’s largest open database of the corporate world and winner of the Open Data Business Award.”

The Rentier Economy: A Primer (Part 2)

My plan for this week’s post was to present further data about the extent of the rentier economy and then provide a digest of articles for further reading.

Turns out that wasn’t so easy. The data is there, but it’s mostly buried in categories like corporate capitalization, profits, and market concentration. Extracting it into blog-post-sized nuggets was going to take some doing.

Further, the data was generally only footnoted in a maelstrom of worldwide commentary. Economists and journalists treated it as a given, barely worthy of note, and were much more interested in revealing, analyzing, and debating what it means. The resulting discourse spans the globe — north to south, east to west, and all around the middle — and there is widespread agreement on the basics:

  • Economic thinking has traditionally focused on income from profits generated from the sale of goods and services produced by human labor. In this model, as profits rise, so do wages.
  • Beginning in the 1980s, globalization began moving production to cheap labor offshore.
  • Since the turn of the millennium, artificial intelligence and robotics have eliminated jobs in the developed world at a pace slowed only by the comparative costs of technology vs. human labor.
  • As a result, lower per-unit costs of production have generated soaring profits while wages have stagnated in the developed world. That is, the link between higher profits and higher wages no longer holds.

Let’s pause for a moment, because that point is huge. Erik Brynjolfsson, director of the MIT Center for Digital Business, and Andrew McAfee, principal research scientist at MIT, wrote about it in their widely cited book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014). The following is from a chapter-by-chapter digest written by an all-star cast of economists:

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress.

On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.
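As a back-of-the-envelope illustration of what that decoupling means, here is a tiny sketch using made-up numbers (not Brynjolfsson and McAfee’s data). Productivity is just output divided by hours worked, and the point is that output can keep climbing while hours, and therefore jobs, flatten out.

```python
# Hypothetical figures, for illustration only -- not actual U.S. data.
# Productivity = real output per hour of labor.
years = [1990, 2000, 2010]
real_output_trillions = [6.0, 9.0, 13.5]       # made-up real output, constant dollars
hours_worked_billions = [200.0, 250.0, 255.0]  # made-up total hours worked

for year, output, hours in zip(years, real_output_trillions, hours_worked_billions):
    productivity = output * 1_000 / hours  # dollars of output per hour worked
    print(f"{year}: ${productivity:,.0f} of output per hour, {hours:.0f}B hours worked")

# In this toy series, output rises 50% between 2000 and 2010 while hours barely
# move: productivity keeps climbing, employment does not -- the "great decoupling."
```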

Okay, point made. Let’s move on to the rest of the rentier story:

  • These trends have been going on for the past four decades, but they have accelerated since the 2007-2009 recession. The result has been a shift to a new kind of job market characterized by part-time, on-demand, contractual freelance positions that pay less and don’t offer fringe benefits. Those who still hold conventional jobs with salaries and benefits are a dying breed, and probably don’t even realize it.
  • As production that doesn’t depend on wage earners has soared, so have profits, resulting in a surplus of corporate cash. Low labor costs and technology have created a boom in corporate investment in patents and other rentable IP assets.
  • Rent-seeking behavior has been increasingly supported by government policy — such as the “regressive regulation” and other “legalized monopoly” dynamics we’ve been looking at in the past few weeks.
  • The combination of long-term wage stagnation and spiraling rentier profits has driven economic inequality to levels rivaled only by pre-revolutionary France, the Gilded Age of the Robber Barons, and the Roaring ’20s.
  • Further, because the rentier economy depends on government policy, it is particularly susceptible to plutocracies, oligarchies, “crony-capitalism,” and other forms of corruption, leading to public mistrust in big business, government, and the social/economic elite.
  • These developments have put globalization on the defensive, resulting in reactionary politics such as populism, nationalism, authoritarianism, and trade protectionism.

As you see, my attempt to put some numbers to the terms “rent” and “rentier” led me straight into some neighborhoods I’ve been trying to stay out of in this series. Finding myself there reminded me of my first encounter with the rentier economy nine years ago, when of course I had no idea that’s what I’d run into. I was at a conference of entrepreneurs, writers, consultants, life coaches, and other optimistic types. We started by introducing ourselves from the microphone at the front of the room. Success story followed success story, and then one guy blew up the room by telling how, back in the earliest days of the internet, he and Starbucks’ Howard Schultz spent $250K buying up domain names for the biggest corporations and brand names. Last year, he said, he made $76 million from selling or renting them back.

He was a rentier, and I was in the wrong room. When it was my turn at the mic, I opened my mouth and nothing came out. Welcome to the real world, my idealistic friend.

As it turns out, following the rentier pathway eventually leads us all the way through the opinionated commentary and current headlines to a much bigger worldwide issue. We’ll go there next time.

Eric and Kevin’s Most Excellent Career Adventures


David Graeber’s book Bullshit Jobs is loaded with real-life job stories that meet his definition of “a form of employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though the employee feels obliged to pretend that this is not the case.” One of those stories rang a bell:  turns out that “Eric” and I had the same job. The details are different, but our experiences involved the same issues of social capital and upward mobility.

Eric grew up in a working-class neighborhood, left to attend a major British university, graduated with a history major, landed in a Big 4 accounting firm training program, and took a corporate position that looked like an express elevator to the executive suite. But then the job turned out to be… well, nothing. No one would tell him what to do. He showed up day after day in his new business clothes and tried to look busy while trying in vain to solve the mystery of why he had nothing to do. He tried to quit a couple of times, only to be rewarded with raises, and the money was hard to pass up. Frustration gave way to boredom, boredom to depression, and depression to deception. Soon he and his mates at the pub back home hatched a plan to use his generous expense account to travel, gamble, and drink.

In time, Eric learned that his position was the result of a political standoff:  one of the higher-ups had the clout to fund a pet project that the responsible mid-level managers disagreed with, so they colluded to make sure it would never happen. Since Eric had been hired to coordinate internal communication on the project, keeping him in the dark was essential. Eventually he managed to quit, kick his gambling and drinking habits, and take a shot at the artistic career he had envisioned in college.

My story isn’t quite so… um, colorful… but the themes are similar. I also came from a strong “work with your hands” ethic and was in the first generation of my family to go to college, where I joined the children of lawyers, neurosurgeons, professors, diplomats, and other upper-echelon white-collar professionals from all 50 states and several foreign countries. At the first meeting of my freshman advisory group, my new classmates talked about books, authors, and academic disciplines I’d never heard of. When I tackled my first class assignment, I had to look up 15 words in the first two pages. And on it went. Altogether, my college career was mostly an exercise in cluelessness. But I was smart and ambitious, and did better than I deserved.

Fast forward nine years, and that’s me again, this time signing on with a boutique corporate law firm as a newly minted MBA/JD. I got there by building a lot of personal human capital, but my steel thermos and metal lunch bucket upbringing was still so ingrained that a few weeks after getting hired I asked a senior associate why nobody ever took morning and afternoon coffee breaks. He looked puzzled, and finally said, “Well… we don’t really take breaks.” Or vacations, evenings, weekends, or holidays, as it turned out.

A couple years later I hired on with a Big 4 accounting firm as a corporate finance consultant. My first assignment was my Eric-equivalent job:  I was assigned to a team of accountants tasked with creating a new chart of accounts for a multinational corporation and its subsidiaries. Never mind that the job had nothing to do with corporate finance…. Plus there were two other little problems:  I didn’t know what a chart of accounts was, and at our first client meeting a key corporate manager announced that he thought the project was ridiculous and intended to oppose it. Undaunted, the other members of the consulting team got to work. Everybody seemed to know what to do, but nobody would tell me, and in the meantime our opponent in management gained a following.

As a result, I spent months away from home every week, trying to look busy. I piled up the frequent flyer miles and enjoyed the 5-star accommodations and meals, but fell into a deep depression. When I told the managing partner about it, he observed, “Maybe this job isn’t a good fit for you.” He suggested I leave in two months, which happened to be when our consulting contract was due for renewal. Looking back, I suspect my actual role on the team was “warm body.”

Graeber says that, at first blush, Eric’s story sounds like yet one more bright, idealistic liberal arts grad getting a real-world comeuppance:

“Eric was a young man from a working-class background… fresh out of college and full of expectations, suddenly confronted with a jolting introduction to the ‘real world.’

“One could perhaps conclude that Eric’s problem was not just that he hadn’t been sufficiently prepared for the pointlessness of the modern workplace. He had passed through the old educational system … This led to false expectations and an initial shock of disillusionment that he could not overcome.”

Sounds like my story, too, but then Graeber takes his analysis in a different direction:  “To a large degree,” he says, “this is really a story about social class.” Which brings us back to the issues of upward mobility and social capital we’ve been looking at. We’ll talk more about those next time.

In the meantime, I can’t resist a Dogbert episode:

[Dilbert comic]

Rolling the Rock:  Lessons From Sisyphus on Work, Working Out, and Life


Here’s a link to my latest LinkedIn Pulse article:  Rolling the Rock:  Lessons From Sisyphus on Work, Working Out, and Life.  In it, I talk about a key psycho-neurological function known as “the pleasure of being the cause.”  As I say in the article, “The conversation is going to get philosophical, but it will be worth it. So get yourself a cup, close the door, turn off the ringer, take a breath. This won’t be spin. It’s based on good ideas from smart people.”

Enjoy!

The Fatal Flaw


A few years ago I wrote a screenplay that did okay in a contest. I made a couple trips to Burbank to pitch it, got no sustained interest, and gave up on it. Recently, someone who actually knows what he’s doing encouraged me to revise and re-enter it.

Among other things, he introduced me to Inside Story:  The Power of the Transformational Arc, by Dara Marks (2007). The book describes what the author calls “the essential story element” — which, it turns out, is remarkably apt not just for film but for life in general, and particularly for talking about economics, technology, and the workplace.

No kidding.

What is it?

Dara Marks calls it “The Fatal Flaw.” This is from the book:

First, it’s important to recap or highlight the fundamental premise on which the fatal flaw is based:

Because change is essential for growth, it is a mandatory requirement for life.

If something isn’t growing and developing, it can only be headed toward decay and death.

There is no condition of stasis in nature. Nothing reaches a permanent position where neither growth nor diminishment is in play.

As essential as change is, most of us resist it, and cling rigidly to old survival systems because they are familiar and “seem” safer. In reality, if an old, obsolete survival system makes us feel alone, isolated, fearful, uninspired, unappreciated, and unloved, we will reason that it’s easier to cope with what we know than with what we haven’t yet experienced. As a result, most of us will fight to sustain destructive relationships, unchallenging jobs, unproductive work, harmful addictions, unhealthy environments, and immature behavior long after there is any sign of life or value to them.

This unyielding commitment to old, exhausted survival systems that have outlived their usefulness, and resistance to the rejuvenating energy of new, evolving levels of existence and consciousness is what I refer to as the fatal flaw of character:

The Fatal Flaw is a struggle within a character
to maintain a survival system
long after it has outlived its usefulness.

As it is with screenwriting, so it is with us as we’re reckoning with the wreckage of today’s collision among economics, technology, and the workplace. We’re like the character who must change or die to make the story work:  our economic survival is at risk, and failure to adapt is fatal. Faced with that prospect, we can change our worldview, or we can wish we had. Trouble is, our struggle to embrace a new paradigm is as perilous as holding to an old one.

What’s more, we will need to reckon with two peculiar dynamics of our time:  “echo chambers” and “epistemic bubbles.” The following is from an Aeon Magazine article published earlier this week entitled “Escape The Echo Chamber”:

Something has gone wrong with the flow of information. It’s not just that different people are drawing subtly different conclusions from the same evidence. It seems like different intellectual communities no longer share basic foundational beliefs. Maybe nobody cares about the truth anymore, as some have started to worry. Maybe political allegiance has replaced basic reasoning skills. Maybe we’ve all become trapped in echo chambers of our own making – wrapping ourselves in an intellectually impenetrable layer of likeminded friends and web pages and social media feeds.

But there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.

Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.

That’s what we’re up against. We’ll plow fearlessly ahead in our examination of new economic models next time.