Eliminating AI hallucinations in legal research?


Stanford University drew both attention and criticism for its recent report on the generative AI legal research tools from LexisNexis and Thomson Reuters. The report focuses on hallucinations, currently the chief fear among attorneys using AI. Lawyers obviously need to understand AI hallucinations. But we also need to understand something called retrieval-augmented generation, or RAG, because this is the technique that's supposed to reduce (or, ideally, eliminate) hallucinations. Can RAG do that for legal research? That's what we'll explore here.
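To make the idea concrete, here is a minimal, toy sketch of the RAG pattern: retrieve the most relevant source documents for a query, then build a prompt that instructs the model to answer only from those sources. All names here are hypothetical, the retrieval is naive keyword overlap, and real legal research tools use far more sophisticated retrieval plus an actual language model.

```python
# Toy illustration of retrieval-augmented generation (RAG).
# Hypothetical helpers; not how any real product is implemented.

def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that tells the model to answer only from sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer using ONLY the sources below; say 'not found' otherwise.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Case A: the statute of limitations for fraud is three years.",
    "Case B: punitive damages require clear and convincing evidence.",
]
prompt = build_grounded_prompt(
    "What is the statute of limitations for fraud?", corpus
)
print(prompt)
```

The grounding instruction is the key move: by constraining the model to the retrieved sources, RAG aims to cut down on fabricated citations, though, as the Stanford report suggests, it does not guarantee their elimination.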