Burnout at the Top: Trust in the Age of Artificial Intelligence

Fire

The late Paul Rawlinson, former Global Chair of Baker McKenzie, left a multifaceted career legacy:

“Rawlinson, an intellectual property lawyer, achieved a number of triumphs in his professional career, including becoming the first British person to lead the global firm as chairman and overseeing a run of outstanding financial growth during his tenure.

“But a key part of Rawlinson’s legacy is also his public decision to step down from the chairman’s role in October, citing ‘medical issues caused by exhaustion.’ He and his firm’s relative openness about the reasons for taking leave helped stimulate a wider discussion about the mental and physical stresses of the profession.”

Baker McKenzie Chairman Helped Erode Taboos About Attorney Health, The American Lawyer (April 15, 2019)

Inspired by Rawlinson’s decision to step down, several other similarly situated leaders went public with their own struggles.[1] Among their stressors was the challenge of how to lead their firms to meet the commercial demands of an era when artificial intelligence has already established its superiority over human efforts in legal research, due diligence, and discovery.[2] It’s not just about efficiency; it’s about the erosion of a key aspect of the attorney-client relationship: trust. As Rawlinson wrote last year:

“‘The robots are coming.’ It’s fast becoming the mantra of our age. And it comes with more than a hint of threat. I’ve noticed especially in the last year or so the phrase has become the go-to headline in the legal news pages when they report on technology in our industry.

“For our profession – where for thousands of years, trust, diligence and ‘good judgement’ have been watchwords – the idea of Artificial Intelligence ‘replacing’ lawyers continues to be controversial. From law school and all through our careers we are taught that the Trusted Advisor is what all good lawyers aspire to become.

“The fundamental issue is trust. Our human instinct is to want to speak to a human. I don’t think that will change. Trust is what we crave, it’s what separates us from machines; empathy, human instinct, an ability to read nuances, shake hands, and build collaborative relationships.”

Will Lawyers Become Extinct In The Age Of Automation?, World Economic Forum (Mar. 29, 2018)

Rawlinson acknowledged that clients are often more concerned with efficiency than preserving the legal profession’s historical trust-building process, demanding instead that “lawyers harness AI to make sure we can do more with less… Put simply, innovation isn’t about the business of law, it’s about the business of business.” As a result, Rawlinson’s goal was to find ways his firm could “use AI to augment, not replace, judgement and empathy.”

Speaking from the client point of view, tech entrepreneur and consultant William H. Saito also weighed in on the issue of trust in an AI world.

“As homo sapiens (wise man), we are ‘wise’ compared to all other organisms, including whales and chimpanzees, in that we can centralize control and make a large number of people believe in abstract concepts, be they religion, government, money or business… This skill of organizing people around a common belief generated mutual trust that others would adhere to the belief and its goals.”

“Looking back at our progress as a species, we can distinguish several kinds of trust that have evolved over time.

“There is the ability to work together and believe in others, which differentiates us from other animals, and which took thousands of years to develop;

“trust associated with money, governments, religion and business, which took hundreds of years;

“trust associated with creating the “bucket brigade” of passing packets of data between unfamiliar hosts that is the internet, which took decades; and

“network trust that has enabled new business models over the past few years.

“Not only is this rate of change accelerating by an order of magnitude, but the paradigm shifts have completely disrupted the prior modes of trust.”

This Is What Will Keep Us Human In The Age Of AI, World Economic Forum (Aug. 4, 2017)

Rawlinson asked, “Will lawyers become extinct?” Saito asked, “Are we humans becoming obsolete?” Both men wrote from a globalized perspective on big policy issues, and the stress of facing them took its toll. Rawlinson’s case of burnout was ultimately terminal. As for Saito, after writing his article on trust, he was discredited for falsifying his résumé — something he clearly didn’t need to do, given his remarkable credentials. That he would do so seems oddly fitting, given his message: trust in the AI age is not about human dependability but about cybersecurity. That is, in the absence of human judgment and collaboration, your technology had better be impeccable.

Most of us don’t live at the rarefied level of those two men. We live where trust still means “empathy, human instinct, an ability to read nuances, shake hands, and build collaborative relationships.”

Or, as my daughter summed it up when I told her about this article, “Buy local, trust local.”

Photo by Ricardo Gomez Angel on Unsplash.

[1] On May 12, 2019, The American Lawyer introduced a yearlong initiative, Minds Over Matters: A Yearlong Examination of Mental Health in the Legal Profession, “to more deeply cover stress, depression, addiction and other mental health issues affecting the legal profession.”

[2] It’s also changing appellate practice, which makes it easy to predict we’ll soon see AI court opinions.

Race Against the Machine Part 2

Rational choice theory is a cornerstone of conventional economic thinking. It states that:

“Individuals always make prudent and logical decisions. These decisions provide people with the greatest benefit or satisfaction — given the choices available — and are also in their highest self-interest.”


Presumably Stephen Hawking, Elon Musk, and Bill Gates had something like this in mind when they published an open letter in January 2015 urging that artificial intelligence R&D should focus “not only on making AI more capable, but also on maximizing the societal benefit.” To execute on this imperative, they urged an interdisciplinary collaboration among “economics, law and philosophy, computer security, formal methods and, of course, various branches of AI itself.” (Since its release, the letter has garnered another 8,000 signatures — you can sign it, too, if you like.)

The letter’s steady, rational four paragraphs praise how technology has benefited the human race and anticipate more of the same in the future, but its reception — and the authors’ comments in other contexts — has not been so measured. As a result, the letter has become a rallying point for those who think humanity is losing its race against the robots.

Consider, for example, the following from an Observer article:

“Success in creating AI would be the biggest event in human history,” wrote Stephen Hawking in an op-ed, which appeared in The Independent in 2014. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Professor Hawking added in a 2014 interview with BBC, “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.”

Elon Musk called the prospect of artificial intelligence “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Mr. Musk cites his decision to invest in the Artificial Intelligence firm, DeepMind, as a means to “just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there.”

Microsoft co-founder Bill Gates has also expressed concerns about Artificial Intelligence. During a Q&A session on Reddit in January 2015, Mr. Gates said, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Or consider this Elon Musk comment in Vanity Fair:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

In other words, Hawking, Gates, and Musk aren’t just worried about machines taking over jobs; they’re worried about the end of the world — or at least the human race. The author of this Washington Post op-ed thinks that might not be such a bad thing:

When a technology is so obviously dangerous — like nuclear energy or synthetic biology — humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential. While it’s scary, sure, that humans may no longer be the smartest life forms in the room a generation from now, should we really be that concerned? Seems like we’ve already done a pretty good job of finishing off the planet anyway. If anything, we should be welcoming our AI masters to arrive sooner rather than later.

Or consider this open letter written back to Hawking, Gates, and Musk, which basically says forget the fear mongering — it’s going to happen no matter what you think:

Progress is inevitable, even if it is reached by accident and happenstance. Even if we do not intend to, sentient AI is something that will inevitably be created, be it through the evolution of a learning AI, or as a byproduct of some research. No treaty or coalition can stop it, no matter what you think. I just pray you do not go from educated men to fear mongers when it happens.

As usual, we’re at an ideological impasse, with both sides responding not so much according to the pros and cons but according to their predispositions. This article suggests a way through the impasse:

At the beginning of this article, we asked if the pessimists or optimists would be right.

There is a third option, though: one where we move from building jobs around processes and tasks, a solution that is optimal for neither human nor machine, to building jobs around problems.

The article is long, well-researched, and… well, very rational. Too bad that, conventional thinking aside, other research shows we rarely act from a rational outlook when it comes to jobs and the economy… or anything else, for that matter.

More on that next time.