AI Legal Failure
June 14, 2023

A recent news story getting a lot of traction involves lawyers who used ChatGPT to perform their legal research and draft their filings. They submitted ChatGPT's work without sufficient review. ChatGPT had hallucinated, and the cases it cited did not exist. Not only did the lawyers lose their case; they are now in trouble and may lose their law licenses.
I believe that this event should more properly be viewed as a system or process development failure rather than an AI failure.
These lawyers followed no development approach, yet they were undertaking a high-risk development of a new business process. There were many opportunities to implement a process that would have met their needs, yet they did nothing to create one.
What was done wrong
- They developed a new process without designing, testing, or validating it.
- The language model they used was not trained on court cases. They could have trained a model on court cases, or had someone else do so, but didn't.
- They could have used the LLM to analyze legal documents efficiently and effectively, but didn't.
- They operated an AI system autonomously, the most challenging mode of operation, without implementing software guidelines and monitoring.
They really did nothing correctly.
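Even a very simple monitoring step would have caught the problem. As a minimal sketch, assuming access to some trusted case-law lookup (the `case_database` set here is a hypothetical stand-in for a real service such as Westlaw or LexisNexis), any citation the lookup cannot confirm gets flagged for human review before filing:

```python
def verify_citations(citations, case_database):
    """Split citations into those confirmed by a trusted source
    and those that could not be verified (possible hallucinations)."""
    verified = [c for c in citations if c in case_database]
    unverified = [c for c in citations if c not in case_database]
    return verified, unverified

# Hypothetical stand-in for a real case-law database lookup.
case_database = {"Smith v. Jones (2001)", "Doe v. Acme Corp. (2015)"}

# Citations extracted from an LLM-drafted filing (illustrative names only).
draft_citations = ["Smith v. Jones (2001)", "Fictional v. Hallucinated (2020)"]

verified, unverified = verify_citations(draft_citations, case_database)
print(unverified)  # anything here must be reviewed by a human before filing
```

The point is not the code itself but the process: the LLM's output is treated as untrusted until an independent check and a human reviewer confirm it.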
Conclusion
- Don’t develop new processes unless you know what you are doing.
- The bad press this generates for AI is probably a good thing.