Thomson Reuters White Paper: The Future Is Here – It’s Just Not Evenly Distributed

Editorial Team


AI here, AI there, AI everywhere. That seems to be the trend. But are we willing to cede good lawyering skills to a bot? That appears to be a real risk, according to a white paper from Thomson Reuters.

There’s a well-known quote attributed to the science fiction author William Gibson: “The future is already here — it’s just not evenly distributed.” The white paper demonstrates this very point: AI is eroding critical thinking skills at an alarming rate. The future will be distributed to those who figure out how to retain and enhance those skills.

The Paper

The white paper amplifies a troubling trend that I’ve discussed before: AI is eroding lawyers’ critical thinking skills. Reading the paper confirms what many, including me, have feared: “As AI becomes more capable, lawyers risk becoming less so.” Without those critical thinking skills, a lawyer simply can’t exercise the analytical skills needed to identify and define legal problems, much less find solutions.

The paper was written by Valerie McConnell, Thomson Reuters VP of solutions engineering and a former litigator, and Lance Odegard, Thomson Reuters director of legaltech platform services.

The Current Threat

The findings should scare the hell out of seasoned lawyers:

The headline? Research from the SBS Swiss Business School found significant correlations between AI use and cognitive offloading on the one hand and a lack of critical thinking on the other. Critical thinking down, cognitive offloading up.

McConnell says that “cognitive muscles can atrophy when lawyers become too dependent on automated analysis.” Odegard adds an even more concerning fact: AI is different from earlier technologies given its speed and depth. And the fact that it can perform some cognitive tasks creates a greater risk of overreliance on it.

I recently attended a panel discussion of law librarians on the use of AI in their law firms. One telling remark: more experienced lawyers were able to form better prompts because they understood, and could better articulate, the problem than less experienced ones. And they could quickly determine whether the output was bogus: when it didn’t look or sound quite right. They gained those skills by developing a critical way of thinking from seeing patterns and prior experiences. AI short-circuits and replaces those pattern-recognition experiences.

The classic example of this is where the AI tool explains a legal concept with certainty, but the explanation doesn’t look right to an experienced lawyer who has dealt with that concept and understands how and why it was developed.

The Accelerated Risks Of Agentic AI

But there’s more danger ahead, according to the paper. Agentic AI can perceive its environment, plan and execute complex multistep workflows, make real-time decisions and adapt strategies, and proactively pursue goals, all without human input. This means, according to the paper, that agentic AI could intensify cognitive offloading. In other words, we turn off our brains and let AI do the thinking for us. And as discussed before, we don’t have a clue how it is doing all this.

McConnell and Odegard believe agentic AI creates “unprecedented professional responsibility challenges.” How can lawyers ethically supervise these systems? What levels of competency will we expect and demand from human lawyers? How will lawyers ethically communicate with clients about strategies developed by the “black box”? Lawyers have an ethical obligation to explain the risks and benefits of strategic decisions: how can we do that when those risks and benefits are developed in ways we don’t understand?

I recently wrote about the phenomenon of legal tech companies buying law firms and the danger of a diminished lawyer in the loop. Agentic AI magnifies those dangers considerably.

Do We Need Critical Thinking?

As with any “truism,” it’s always useful to pause and reflect on whether it’s really a truism: how much will future lawyers even need critical thinking skills when AI can do it for them?

McConnell and Odegard certainly believe that future lawyers will need those skills. They believe that AI can’t replicate those skills, nor can it yet replace the creativity and nuanced understanding of a human lawyer.

I agree with them on this point. I see it frequently as AI spits out solutions as if handed down from above. And it sticks to its guns even when it’s wrong. The fact that the tools are so easy and quick to use also makes it quite tempting to just accept what they say without thinking it over. That’s especially the case for busy lawyers.

And that’s one reason we’re continuing to see hallucinated cases cited in briefs and even judicial opinions.

But what happens when we rely on the bot instead of our own instincts born of experience? A few years ago, I entrusted the handling of a big hearing to local counsel. The day before the hearing, after talking to the local counsel, I got the feeling that something was not quite right. So I quickly hopped on a plane and went to the hearing myself. Good thing: the local counsel didn’t show and sent a first-year associate to handle the critical hearing. I doubt a bot would have picked up on that nuance.

The Risks For Future Generations

McConnell and Odegard also cite the danger that overreliance on AI to replace those skills will erode younger lawyers’ development. It could result in lawyers relying too much on AI instead of thinking for themselves. It could result in “lawyers skilled at managing AI but lacking independent strategic thinking.”

I too have discussed this very real problem. Doing what many call scut work as a young lawyer was boring and tedious, but it helped you begin to see patterns that could be helpful later in similar circumstances.

But now we’re urged to dump those tasks into a chatbot and forget about them. The result in 10 years? Minds full of mush. The old notion of thinking like a lawyer may be replaced by thinking like a bot.

Another danger: the erosion of legal education. According to the paper, “students increasingly arrive with diminished critical thinking skills due to pre-law AI exposure while expecting to use AI tools throughout their careers.” If we don’t take steps to disrupt that expectation, we can be sure that those students, when they become lawyers, will continue to use AI tools in exactly the same way.

Can The Risks Be Managed?

To be fair, McConnell and Odegard believe these risks can all be managed through responsible use of existing AI tools. That may be true, but as with most technology, some lawyers and legal professionals will figure out how to do this and become future superstars. Many will not. And maybe that’s OK, since many legal jobs, and much of the work now done by humans, will be replaced by AI.

Certainly, AI will allow lawyers and legal professionals to do the high-end work for which they were trained. But let’s be real here: there’s not enough demand for the high-end work to go around. And many lawyers and legal professionals are not that good at it.

The Future: It Won’t Be Evenly Distributed

So, want to prepare for the future? Figure out how to encourage and develop critical thinking skills among your workforce in the age of AI. Figure out what to do when the only work left to be done is high-end thinking. That means preparing for a law firm that looks very different from today’s.

Get ready for the future; it’s not going to be evenly distributed.


Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
