Aldous Huxley’s dystopian masterpiece “Brave New World” was published more than 90 years ago. Set well into the future, it depicts a world whose population is separated into classes based primarily upon intelligence. The story’s protagonist, Bernard Marx, discovers the world outside the class system. In contrast to Jules Verne’s more optimistic novels about the future, Huxley’s “Brave New World” dwells on the negative aspects of a society enabled by technology. Huxley anticipated a number of developments, but computers and artificial intelligence were not among them. I wonder how his novel would have been different if they had been.
AI’s role in the evolution of legal technology has been much discussed in recent months. Last month, in “Are The Robots Finally Here,” I shared the game-changing potential of ChatGPT, including some potential applications, inherent limitations of the technology, and the potential for nefarious actors to use AI for negative purposes. After publication, a colleague pointed me to the phenomenon of AI Hallucinations, in which an AI fabricates answers, complete with fictitious sources for its work. Dr. Lance Eliot at Stanford University is a renowned expert on AI who also writes about AI ethics, AI legal considerations, and Responsible AI. He has a simple framing for the ethical issues: there will be AI for Good and AI for Bad, along with a concept of Responsible AI.
How could Responsible AI and AI for Good help guide us toward a vision for the future along the lines of Jules Verne’s optimism — instead of Aldous Huxley’s dystopian tragedy?
Pure AI Has Limitations, But Hybrid AI Solutions Coupled With Human Input Can Win The Day
Pure AI solutions like ChatGPT will have limits unless they are augmented by other technology and overseen by humans. By Pure AI, I mean a singular AI technology, rather than a heterogeneous mix of technologies that includes AI augmented with other rules or logic. When another technology is better suited to the task at hand, it should be used. I would call this Hybrid AI: applications where AI is an important part of the technology, but not the only technology.
It is unclear how many rules (if any) are embedded in ChatGPT right now. Does ChatGPT know to put legal disclaimers on its results? Did legal counsel at OpenAI trust the AI enough to let it “learn” about legal disclaimers and apply them on its own? Or are there rules and logic embedded to make sure that a legal disclaimer is included in any response that could be construed as legal advice? The answers to questions like these are unclear.
ChatGPT could be made much stronger with a rules engine that can “box in” the AI and force the application to apply prescriptive steps to its outputs. That would be especially important in domains like legal, health, risk compliance, or science, where many facts are already established.
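As a thought experiment, here is a minimal sketch of what such a rules layer could look like in Python. Everything here is hypothetical: the `generate` function stands in for whatever model call an application makes, and the keyword patterns and disclaimer text are illustrative only. The point is that the disclaimer rule runs deterministically in code instead of being entrusted to the model.

```python
import re

# Hypothetical sketch: a rules layer that "boxes in" a model's output.
# `generate` stands in for any underlying model call.

LEGAL_PATTERNS = re.compile(
    r"\b(sue|lawsuit|liabilit\w*|contract|bankrupt\w*|statute)\b",
    re.IGNORECASE,
)

DISCLAIMER = (
    "\n\nThis response is general information, not legal advice. "
    "Consult a licensed attorney about your specific situation."
)

def generate(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an API request).
    return f"[model output for: {prompt}]"

def answer(prompt: str) -> str:
    """Run the model, then apply prescriptive rules to its output."""
    text = generate(prompt)
    if LEGAL_PATTERNS.search(prompt) or LEGAL_PATTERNS.search(text):
        # The rule is enforced in code, not left for the model to "learn."
        text += DISCLAIMER
    return text

print(answer("Can my landlord sue me for breaking a lease?"))
```

However naive the pattern matching, the design choice matters: the prescriptive step cannot be skipped, because it sits outside the model entirely.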
AI Can Leverage What Is Already Known
Think about it. There is no need to depend upon AI to discover facts that are already known. Something like the exact language of a codified law or the interaction of two prescription drugs does not need to be learned by AI; it is already known. When AI recognizes certain patterns, such as a legal question, the application should be directed to apply rules and provide references to authoritative work, like a citation to a codified law.
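To illustrate, a hybrid application might consult an authoritative lookup table before ever invoking a model. The citation table and keyword matching below are deliberately simplistic placeholders; a real system would draw on a curated legal database.

```python
# Hypothetical sketch: consult known, authoritative facts before
# falling back to AI generation. The table below is illustrative.

KNOWN_CITATIONS = {
    "bankruptcy": "11 U.S.C. (the U.S. Bankruptcy Code)",
    "copyright": "17 U.S.C. § 106 (exclusive rights in copyrighted works)",
}

def detect_topic(question: str):
    """Naive keyword routing; a real system would use a classifier."""
    for topic in KNOWN_CITATIONS:
        if topic in question.lower():
            return topic
    return None

def respond(question: str) -> str:
    topic = detect_topic(question)
    if topic:
        # Known fact: cite the authoritative source directly rather than
        # asking the model to rediscover (or hallucinate) the law.
        return f"See {KNOWN_CITATIONS[topic]} for the governing text."
    return "[fall back to an AI-generated answer, flagged for human review]"

print(respond("What rights does copyright give an author?"))
```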
Taking those steps raises another issue. How do we know that rules within an application aren’t biased or incorrect? What if the application applies the wrong rules? The reality is that we don’t always know those answers ourselves in the real world.
But rules and logic can be researched and verified if they are disclosed. This is true in the real world, and it is true within applications that provide transparency. Society generally trusts the legal system, and clients generally trust attorneys and judges. The same is true of our healthcare system: it is not perfect, but we tend to trust our doctors and hospitals to do right by us. And if there is a significant enough issue with a ruling or a health decision, there are formal processes for review.
For all the fear surrounding AI, the future will be much brighter as more Hybrid AI is developed that combines established facts and known information with rules.
Here is a quick example of how Hybrid AI could solve a legal problem. An attorney’s bot receives an inquiry via chat from a client. Over the course of the chat, client intake occurs, and the bot concludes that the client’s issue is related to bankruptcy. As a Hybrid AI solution, the bot then asks specific questions and applies known rules to complete existing forms specifically tied to bankruptcy filings. The attorney receives the information, reviews the entire process, clarifies anything that needs to be changed, communicates with the client, and puts her stamp of approval on the filing. An experience like this is not far-fetched and could provide a faster, more efficient pathway for attorneys to work with their clients.
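Here is a rough sketch of that flow. The form fields, intake questions, and the keyword check standing in for an AI classifier are all hypothetical; what matters is that the AI step only routes the matter, deterministic rules fill the known form fields, and nothing moves forward without the attorney’s explicit sign-off.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the intake flow described above. The AI piece
# (classify_issue) would be a model call in practice; everything after it
# is deterministic, and nothing is filed without attorney sign-off.

BANKRUPTCY_QUESTIONS = {
    "debtor_name": "What is your full legal name?",
    "total_debt": "What is your approximate total debt?",
    "income": "What is your monthly income?",
}

@dataclass
class Draft:
    issue: str
    fields: dict = field(default_factory=dict)
    approved: bool = False

def classify_issue(inquiry: str) -> str:
    # Stand-in for an AI classifier over the chat transcript.
    return "bankruptcy" if "debt" in inquiry.lower() else "general"

def run_intake(inquiry: str, answers: dict) -> Draft:
    issue = classify_issue(inquiry)            # AI step: route the matter
    draft = Draft(issue=issue)
    if issue == "bankruptcy":
        for key in BANKRUPTCY_QUESTIONS:       # rules step: known form fields
            draft.fields[key] = answers.get(key, "<missing>")
    return draft

def attorney_review(draft: Draft) -> Draft:
    # Human-in-the-loop: approval is explicit, never automatic.
    draft.approved = all(v != "<missing>" for v in draft.fields.values())
    return draft

draft = run_intake(
    "I can't keep up with my debt payments.",
    {"debtor_name": "Jane Doe", "total_debt": "$45,000", "income": "$3,200"},
)
print(attorney_review(draft))
```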
Who Will Benefit From Hybrid AI Approaches?
I’d like to think that society will, and I’m admittedly an optimist. Within the legal community, a Hybrid AI approach would benefit lawyers, law firms and law departments.
Legal information providers who are a trusted part of the legal community and produce authoritative materials will benefit. Lawyers and judges who rely upon those materials will benefit too.
No-code and low-code software platforms will benefit because of their rules-based technology, as will any organization with a repository of factual information to leverage. That can include law departments, with their policy documents and contract lifecycle management solutions, and law firms with extensive databases of prior work product across all types of matters and clients.
There is a lot of talk about AI being disruptive and scary. AI will indeed be disruptive, but Hybrid AI has a lot of positive potential. Hybrid AI solutions can leverage established facts and authoritative work that do not have to be rediscovered by AI. If Hybrid AI applications take off, there will still be new entrants in the legal industry, but many stakeholders will be able to embrace these technologies, benefit from them, and thrive.
Unlike in Huxley’s “Brave New World,” there are reasons to believe the future of the legal industry holds the promise for even more positive possibilities. I believe AI for Good can triumph over AI for Bad.
Ken Crutchfield is Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S., a leading provider of information, business intelligence, regulatory and legal workflow solutions. Ken has more than three decades of experience as a leader in information and software solutions across industries. He can be reached at ken.crutchfield@wolterskluwer.com.